ETTC 2015

The preliminary programme of the ETTC 2015 conference is available on the website ettc2015.org.
The Association Aéronautique et Astronautique de France (3AF) and the Société de l’Electricité, de l’Electronique et des Technologies de l’information et de la Communication (SEE) invite you to submit a paper or to exhibit at the conference.

About

ETTC 2015 will provide the opportunity for scientists and engineers to report and discuss the latest developments in testing methods, especially in the aeronautics and space domains.

The 3AF and SEE societies are organising this new edition of the European Test and Telemetry Conference, ETTC 2015. This year, particular attention will be paid to how “Big Data” technologies can help the test community. One special session will be organised by the ICTS (International Consortium for Telemetry Spectrum) and another by the ETSC (European Telemetry Standardization Committee).

Exhibition

Ongoing technological developments and recent test equipment will be on display at the exhibition associated with the conference.

ETTC 2015 Office

3AF – 10, avenue Edouard Belin – 31400 TOULOUSE – France
Tel: +33(0)5 62 17 52 80 – Fax: +33(0)5 62 17 52 81

Since 1985, ETTC has been jointly organised by 3AF and SEE, in liaison with the Arbeitskreis Telemetrie e.V. in Germany and the International Foundation for Telemetering in the United States. ETTC is held in France in odd years, alternating with the ETC conference held in Germany in even years.
 

Sponsors and organisers

Documents

ETTC 2015 Programme
 
 
 

ETTC 2015 PROGRAMME – ISSUE 1

PLENARY SESSION
- N° 1 – A350 Flight test campaign – Patrick DU CHE – AIRBUS – France
- N° 2 – nEUROn Flight test campaign – Sylvain COURTOIS – DASSAULT AVIATION – France
- N° 3 – SNCF Railway Rolling Stock Test Centre and Test in railway domain – Franck BOURGETEAU and Daniel CHAVANCE – SNCF – France
- N° 4 – Last news from Rosetta – Philippe GAUDON – CNES – France
- N° 5 – NASA’s Optical Communications Program for 2015 and Beyond – Donald CORNWELL – NASA – USA

TECHNICAL PROGRAMME – ORAL PRESENTATIONS

SESSION N° 1 – Transducers, measurement devices – AIM² European research programme
Chairman: Fritz Boden – DLR – Germany
- N° 1 – Advanced In-flight Measurement Techniques – Fritz Boden – DLR – Germany
- N° 2 – Recalibration of a Stereoscopic Camera System for In-flight Wing Deformation Measurements – Tania Kirmse – DLR Göttingen – Germany
- N° 3 – In-flight wing deformation measurements by image correlation technique on A350 – Benjamin Mouchet and Vincent Colman – AIRBUS Operations SAS – France
- N° 4 – Rotating Camera System for Propeller and Rotor Blade Deformation Measurements – Fritz Boden, Boleslaw Stasicki and Marek Szypula – DLR – Germany
- N° 5 – Development of Fibre Optic Strain and Pressure Instruments for Flight Test on an Aerobatic Light Aircraft – Nicholas Lawson, Ricardo Goncalves Correia, Ralph Tatam, Stephen James and Jim Gautrey – Cranfield University – United Kingdom
- N° 6 – Recent achievements in Doppler lidars for aircraft certification – Claudine Besson, Beatrice Augere, Agnès Dolfi-Bouteyre, William Renard and Guillaume Canat – ONERA – France
- N° 7 – NURMSYS – New Upstream Rotating Measurement System for gas turbine exhaust gases analysis – Bertrand Carré, Sylvain Loumé and David Lalanne – AKIRA Technologies – France

SESSION N° 2 – Test data acquisition and recording
Chairman, Part 1: Christian Herbepin – AIRBUS HELICOPTERS – France
- N° 1 – FALCON 5X1 Flight Test Instrumentation – Jean-Pierre Rouby – DASSAULT AVIATION – France
- N° 2 – High speed development of a temperature remote acquisition system to reduce instrumentation heat sink in an aircraft engine – Jean-Christophe Combier – AIRBUS Operations SAS – France
- N° 3 – Combined position, attitude measurement with precise time distribution for observation payload – Emmanuel Sicsik-Pare, Gilles Boime and John Fischer – Spectracom – France, USA
- N° 4 – Optimizing Bandwidth in an Ethernet Telemetry Stream using a UHF Uplink – Moises Gonzalez-Martin and Pedro Rubio-Alvarez – AIRBUS DEFENCE AND SPACE – Spain
- N° 5 – Flexible Switching for Flight Test Networks – Diarmuid Collins – Curtiss-Wright Defense Solutions, Avionics & Electronics – Ireland
- N° 6 – Evolving embedded electronics testing in HIL simulation and large-scale test cells through sub-ns synchronization systems via Time Sensitive Networks in Ethernet – Kurt Veggeberg and Olivier Daurelles – National Instruments – United States, France

Chairman, Part 2: Diarmuid CORRY – Curtiss-Wright Controls Avionics & Electronics – Ireland
- N° 7 – PTPv1 vs PTPv2: Characteristics, differences and time synchronization performances – Guillermo Martinez – Airbus Military – Spain
- N° 8 – User Programmable FPGA I/O for Real-Time Systems – Combining User Friendliness, Performance, and Flexibility – Yannick Hildenbrand, Andreas Himmler and Jürgen Klahold – dSPACE GmbH – Germany
- N° 9 – Guaranteed end-to-end latency through Ethernet – Øyvind Holmeide and Markus Schmitz – OnTime Networks – Norway, United States
- N° 10 – Lessons for Onboard Data Storage from the worlds of Electronic Data Processing and Airborne Video Exploitation – Malcolm Weir – Ampex Data Systems Corporation – United States
- N° 11 – Cabin Comfort Flight Test Installation – Joel Galibert, Aymeric Plo and Stephane Garay – AIRBUS Operations SAS – France
- N° 12 – The research on wireless sensor network for the aerocraft measurement system – Juan Lu, Ying Wang and Bingtai Liu – Beijing Institute of Aerospace Systems Engineering – China

SESSION N° 3 – Big data and test data processing and analysis
Chairman: Guy Destarac – 3AF – France
- N° 1 – How to Harness the Value of IoT / Fast Data / Big Data and Data Analytics Technologies for the Tests Community – Frédéric Linder and Stéphane Biguet – Oracle – France
- N° 2 – BigData applications for Telemetry – Greg Adamski and Gilles Kbidy – L-3 Communications Telemetry-West – United States
- N° 3 – Case Study: Proposal of Architecture for Big Data Adoption – Luiz E. G. Vasconcelos, André Y. Kusumoto, Nelson P. O. Leite and Cristina M. A. Lopes – IPEV/ITA, ITA – Brazil
- N° 4 – How Big Data technology brings added-value and agility during a flight campaign? – Laurent Peltiers and Jean-Marc Prangère – AIRBUS Operations – France
- N° 5 – Big Analog Data – Extracting Business Value from Test & Telemetry Data – Otmar Foehner, Robert Lee and Olivier Daurelles – National Instruments – United States, United Kingdom, France
- N° 6 – Improving Test Cell Efficiency by Monitoring Measurements – Aurélie Gouby – Snecma – France
- N° 7 – Processing Ethernet Flight Test Data with Open Source Tools – Paul Ferrill – Avionics Test and Analysis Corporation – United States

SESSION N° 4 – ICTS (International Consortium for Telemetry Spectrum)
Chairman: Jean-Claude GHNASSIA – 3AF – France
- N° 1 – Welcome and Introduction by the ICTS Chair – J.-C. Ghnassia
- N° 2 – Regional Reports: RI: J.-C. Ghnassia; RII: G. Mayer; RIII: M. Ryan (presented by J.-C. Ghnassia)
- N° 3 – “World Radiocommunication Conference 2015 (WRC-15) – Agenda Items Relative to Telemetry” – G. Mayer
- N° 4 – “C-band for Airbus telemetry: status and improvement” – L. Falga
- N° 5 – “Eurocopter’s Conversion to C-band” – tbc
- N° 6 – Threats to aeronautical telemetry in USA: update #10 – S. Hoshar
- N° 7 – Conclusion and Closure – J.-C. Ghnassia

SESSION N° 5 – Telemetry frequency (spectrum management), modulation, telemetry systems
Chairman: Gilles FREAUD – Airbus – France
- N° 1 – A new design to ground TM/TC communications for spacecraft launch campaign at Guiana Space Centre – Nicolas Hugues and Michel Thomas – CNES, ZDS – France
- N° 2 – The entry into service of C-band Telemetry at Airbus Test Centre: first result and way of improvement – Luc Falga – AIRBUS Operations – France
- N° 3 – Combining a Reed-Solomon block code with a blind equalizer: synchronization and bit error rate performance – Alexandre Skrzypczak, Gregory Blanc and Tangi Le Bournault – Zodiac Data Systems – France
- N° 4 – Limitation of the 2 antennas problem for aircraft telemetry by using a blind equalizer – Alexandre Skrzypczak, Gregory Blanc and Tangi Le Bournault – Zodiac Data Systems – France
- N° 5 – A Gaussianization-based performance enhancement approach for coded digital PCM/FM – Guojiang Xia, Xinglai Wang and Kun Lan – Beijing Institute of Astronautical Systems Engineering – China
- N° 6 – Real time C Band Link Budget Model Calculation – Francisco M. Fernandez – Airbus Defence and Space – Spain

SESSION N° 6 – Space Telemetry
Chairman: Jean-Luc ISSLER – CNES – France
- N° 1 – Rosetta-Philae RF link, from separation to hibernation – Clément Dudal, Céline Loisel, Emmanuel Robert, Miguel Angel Fernandez, Yves Richard and Gwénaël Guillois – CNES, Syrlinks – France
- N° 2 – JASON3, a story of TT&C interference handling – Céline Loisel and Gérard Zaouche – CNES – France
- N° 3 – Wavelet and source coding on Ariane 5 telemetry data – Didier Schott – Airbus Defence & Space – France
- N° 4 – Cubesat communication CCSDS hardware in S and X band – Jean-Luc Issler and Philippe Lafabrie – CNES – France
- N° 5 – Implementation of a high throughput LDPC decoder in space-based TT&C – Wen Kuang, Nan Xie and Xianglu Li – Institute of Electronic Engineering, China Academy of Engineering Physics – China
- N° 6 – The Implementation of an IP-Based Telemetry System of Launch Vehicle – Feng Tieshan, Lan Kun and Zhao Weijun – Beijing Institute of Astronautical Systems Engineering – China

SESSION N° 7 – MDL (Meta Data Language group)
Chairman: Lee H. ECCLES – Boeing – USA
Programme tbc:
- Target of the group
- Details of the actual situation
- Discussion

SESSION N° 8 – ETSC (European Telemetry Standardization Committee)
Chairman: Gerhard MAYER – GMV Consulting – Germany
Welcome & Introduction (G. Mayer)
1. Committee Reports
   • ETSC Subcommittees:
     SC-1: Spectrum & Frequency Management (J.-C. Ghnassia, S. de Penna)
     SC-2: Data Acquisition & Processing (W. Lange, Ch. Herbepin)
     SC-3: Data Recording & Storage (B. Bagó, P. Morel, S. W. Lyons)
     SC-4: Networked Telemetry (E. Schulze, Ch. Eder)
   • Telemetering Standards Coordination Committee (TSCC) (D. Corry)
   • Consultative Committee for Space Data Systems (CCSDS) (R. Ritter)
2. Short Presentation & Discussion: briefing on the “Working Draft IRIG 106, Chapter 7” (B. Bagó)
3. New Business: membership standings, coming elections, preparation for ETC 2016
Conclusions & Adjourn

POSTER PRESENTATIONS

SESSION N° P1 – TEST METHODS
- N° 1 – Computational results for flight test points distribution in the flight envelope and dynamic relocation – Lina Mallozzi, Alessandro D’Argenio, Pierluigi De Paolis and Giuseppe Schiano – Dipartimento di Ingegneria Aerospaziale, Università degli Studi di Napoli “Federico II” – Italy
- N° 2 – High-resolution electro-acoustic transducer for dielectric characterization of outer space materials – Lucie Galloy-Gimenez, Laurent Berquez, Fulbert Baudoin and Denis Payan – LAPLACE, CNES – France
- N° 3 – Non-contacting Methods with Lidar for Spacecraft Separation Ranging – Shengzhe Chen, Hui Feng and Yuzhi Feng – Beijing Institute of Aerospace Systems Engineering – China
- N° 4 – How do you go about achieving your video recorder? (Chapter 2) – Pierrick Lamour and Loic Mauhourat – TDM – France
- N° 5 – SpaceWireless: Time-synchronized & reliable wireless sensor networks for Spacecraft – Damon Parsy – Beanair – Germany

SESSION N° P2 – TEST TOOLS AND SIMULATION
- N° 1 – LTM scalable to the tests – Sylvain Derlieu – AIRBUS OPERATIONS SAS – France
- N° 2 – Optimized Automatic Calibration Tool: Application for Flight Test Programs – Enrique Torello, Jose Manuel Baena, Lorenzo Miranda and Pilar Vicaria – AIRBUS DEFENCE AND SPACE – Spain
- N° 3 – Means driven by tests – Jerome Sartolou and Bruno Chaduteau – Nexeya – France
- N° 4 – Design and implementation of LAN-based real-time simulation system of high frequency communication – Rui Song, Daquan Li, Guangming Zhou and Guojiang Xia – Beijing Institute of Astronautical Systems Engineering – China

SESSION N° P3 – PROPAGATION, JAMMING AND ASSOCIATED MITIGATION
- N° 1 – Characterization of the unavailability due to rain of an X band radar used for range safety at Kourou Space Center – Frédéric Lacoste, Jérémie Trilles and Clément Baron – CNES – France
- N° 2 – Channel capacity estimation of stacked circularly polarized antennas suitable for drone applications – Ioannis Petropoulos, Jacques Sombrin, Nicolas Delhote and Cyrille Menudier – SigmaLim Labex, University of Limoges – France
- N° 3 – Pattern-reconfigurable antenna design for telemetry and wireless communication systems – Gaojian Kang, Daquan Li and Xinglai Wang – Beijing Institute of Astronautical Systems Engineering – China


This zip file contains all the ETTC 2015 communications and the final programme.
 

Archive: Proceedings ETTC 2015.zip
   Length  Date        Time   Name
 --------- ----------  -----  ----
     80547 2015-05-28  19:29  0 Programme iss 1.pdf
         0 2015-06-07  10:27  __MACOSX/
       177 2015-05-28  19:29  __MACOSX/._0 Programme iss 1.pdf
    311474 2015-05-26  07:57  1-1.pdf
    450542 2015-05-07  12:21  1-2.pdf
       177 2015-05-07  12:21  __MACOSX/._1-2.pdf
     26311 2015-05-28  17:31  1-3.pdf
    539065 2015-05-26  07:56  1-4.pdf
    731466 2015-05-09  14:15  1-5.pdf
    907111 2015-05-11  21:57  1-6.pdf
   1489593 2015-05-01  16:46  1-7.pdf
    557202 2015-05-25  09:02  2-1.pdf
    316958 2015-05-05  18:59  2-10.pdf
    213089 2015-05-28  17:48  2-11.pdf
    345205 2015-05-28  17:47  2-12.pdf
     12352 2015-05-28  17:38  2-2.pdf
    604815 2015-05-07  12:23  2-3.pdf
    143661 2015-05-18  08:54  2-4.pdf
    588149 2015-04-30  18:23  2-5.pdf
    253267 2015-05-28  17:39  2-6.pdf
    263010 2015-05-01  16:37  2-7.pdf
    917660 2015-05-25  10:17  2-8.pdf
     12296 2015-05-28  18:09  2-9.pdf
     12653 2015-05-28  18:14  3-1.pdf
   1062817 2015-05-01  16:54  3-2.pdf
    506649 2015-06-01  14:23  3-3mod.pdf
    459056 2015-05-11  21:53  3-4.pdf
     12399 2015-05-28  18:18  3-5.pdf
    595049 2015-05-09  10:03  3-6.pdf
     12172 2015-05-28  18:20  3-7.pdf
   1183723 2015-04-30  09:32  5-1.pdf
     12895 2015-05-28  18:25  5-2.pdf
    354529 2015-05-28  17:24  5-3.pdf
    329996 2015-05-28  18:27  5-4.pdf
    390969 2015-05-09  14:34  5-5.pdf
     12479 2015-05-28  18:29  5-6.pdf
    492982 2015-05-01  17:48  6-1.pdf
    580340 2015-04-30  19:31  6-2.pdf
    276464 2015-05-28  18:41  6-3.pdf
     13566 2015-05-28  18:31  6-4.pdf
     12183 2015-05-28  18:44  6-5.pdf
    294044 2015-04-30  10:04  6-6.pdf
    714526 2015-05-01  17:45  P1-1.pdf
     13263 2015-05-28  19:10  P1-2.pdf
       177 2015-05-28  19:10  __MACOSX/._P1-2.pdf
     12912 2015-05-28  19:11  P1-3.pdf
     33043 2015-05-01  17:34  P1-4.pdf
    392029 2015-06-01  14:23  P1-5mod.pdf
       177 2015-06-01  14:23  __MACOSX/._P1-5mod.pdf
    284436 2015-05-08  09:10  P2-1.pdf
    132126 2015-05-09  14:44  P2-2.pdf
    132124 2015-04-30  09:36  P2-3.pdf
     12391 2015-05-28  19:14  P2-4.pdf
     13175 2015-05-28  19:17  P3-1.pdf
    486231 2015-04-30  19:33  P3-2.pdf
    390969 2015-04-30  09:43  P3-3.pdf
 ---------                    -------
  17996671                    56 files


Advanced In-flight Measurement Techniques

F. Boden
DLR, Bunsenstraße 10, 37073 Göttingen, Germany, fritz.boden@dlr.de

Abstract: Advanced optical measurement techniques offer benefits over classical sensor measurements in terms of non-intrusiveness, measurement time and areal coverage. Although these techniques (e.g. Particle Image Velocimetry (PIV) or the Image Pattern Correlation Technique (IPCT)) are well established in wind-tunnel and laboratory applications, making them easily applicable to flight testing is a very challenging task. Therefore, about nine years ago the first EC project “Advanced In-flight Measurement Techniques” (AIM) was launched within the EU FP6. Researchers and specialists demonstrated the general feasibility of applying their non-intrusive measurement techniques to industrial flight testing. The follow-up project AIM² (EU FP7 contract no. 266107) was started in 2010, intended to improve the AIM techniques towards routine application in flight testing. In this paper, the two AIM projects are presented briefly and an overview of the applied measurement techniques is given.

Keywords: Flight Test, Instrumentation, Optical Measurement Techniques, AIM, PIV, IPCT, IRT, LIDAR, PSP, BOS, FBG

1. Introduction

Making new advanced optical measurement techniques that have their roots in wind-tunnel or laboratory research applicable to time- and cost-efficient flight testing is a very long journey. Even though some of these non-intrusive measurement techniques are standard methods in wind-tunnel testing, it has to be demonstrated again that they can deliver reliable measurement results under flight-testing conditions. Although optical measurement techniques are beneficial compared to classical sensor measurements in terms of non-intrusiveness, measurement time and areal coverage, the latter are often applied in flight testing for reasons of simplicity.

Therefore, in 2006 the European project “Advanced In-flight Measurement Techniques” (AIM) was launched. Researchers and specialists demonstrated the general feasibility of applying their non-intrusive measurement techniques to industrial flight testing. After the AIM project, the follow-up project AIM² was started in 2010. It ran for 48 months and was intended to improve the AIM techniques towards routine application in flight testing. To this end, many of the challenges identified within AIM had to be addressed, and basic application rules as well as toolboxes had to be developed. This paper gives a brief overview of both AIM projects and shortly describes the optical measurement techniques BOS, FBG, IPCT, IRT, LIDAR, PIV and PSP.

2. The AIM project

On 1 November 2006, the European Specific Targeted Research Project (STReP) “AIM - Advanced In-Flight Measurement Techniques” was launched within the 6th European research framework programme FP6. Its duration was 42 months. The goal of the project was to show the applicability of highly sophisticated optical measurement techniques to industrial flight tests. To achieve this target, eleven partner organisations from aircraft industries, airport services and research organisations worked closely together within AIM.
The project was split into seven work packages (WP), further subdivided into several tasks:

• WP0 – coordination,
• WP1 – wing deformation studies,
• WP2 – propeller deformation studies,
• WP3 – helicopter studies,
• WP4 – surface flow measurements,
• WP5 – high lift flow structures,
• WP6 – industrial flight testing.

Figure 1: Structure of the AIM project

In what follows, the main content of these work packages is briefly presented.

2.1 WP0 – Coordination

WP0 was the main work package for all project management aspects, such as the coordination of the work package activities, the management of contractual, financial and administrative issues, and the communication with the EC. Furthermore, it comprised the exploitation of results, the creation of a communication platform for the partners and an official website [1].

2.2 WP1 – Wing Deformation Studies

WP1 mainly contained the in-flight measurement of wing deformation by means of the Image Pattern Correlation Technique (IPCT). Digital image correlation methods were further developed to apply them to wing deformation measurements. Flight tests were conducted on a Fairchild Metro II [2] and on a Piaggio P.180 [3] [4].

2.3 WP2 – Propeller Deformation Studies

In WP2, propeller blade deformation measurements by means of IPCT were realised on a Piaggio P.180 [5]. In addition, an assessment of the IPCT for propeller deformation measurements was performed by one aircraft manufacturer [6].

2.4 WP3 – Helicopter Studies

WP3 was the work package for all helicopter measurements. IPCT was applied to main rotor blade deformations on a Eurocopter EC 135 helicopter [7]. Blade tip vortices were investigated by means of LIDAR [8], BOS and PIV [9] on an MBB Bo105.

2.5 WP4 – Surface Flow Measurements

In WP4, PSP was applied to measure the surface pressure distribution on the pylon of the VFW 614 ATTAS [10]. In addition, IRT was improved for in-flight temperature measurements [11].

2.6 WP5 – High Lift Flow Structures

WP5 was the work package with the most challenging tasks, as it intended to use the wind-tunnel measurement techniques PIV and BOS for the ground-based measurement of wake vortices of a landing aircraft [12] and PIV for non-intrusive in-flight flow field measurements [13]. Within AIM, a PIV flight test installation was flown for the first time.

2.7 WP6 – Industrial Flight Testing

WP6 was intended to apply the promising techniques IPCT and IRT under real industrial boundary conditions. On the one hand, IPCT was applied to wing deformation measurements on an Airbus A380 [14]; due to the enormous dimensions of the aircraft, this was a quite challenging task. On the other hand, IRT was applied to engine exhaust temperature measurements on a Eurocopter EC225 Super Puma [11].

3. The AIM² project

On 1 October 2010, the EC-funded project AIM² (Advanced In-flight Measurement Techniques 2) was launched as a continuation of the preceding project AIM. Whereas the first AIM project proved the principal feasibility of using modern optical wind-tunnel measurement techniques for in-flight measurements, AIM² focused on developing reliable and easy-to-use dedicated measurement systems and on defining design and application rules for these new in-flight measurement techniques.
The project ran for 48 months, structured in progressive steps: starting with basic studies on challenges discovered in the preceding project, leading to optimised measurement systems to be tested under research conditions, and finally to be proven in an industrial environment. AIM² comprised four partners from aerospace industries, one SME, three research organisations and three universities with expertise in optical measurement techniques, flight testing and training. The project was structured into six main work packages (WP):

• WP1 – Management,
• WP2 – Deformation Measurements on Wings and Control Surfaces,
• WP3 – Deformation Measurements on Propeller Blades,
• WP4 – Surface Flow Measurements,
• WP5 – Flow Field Measurements,
• WP6 – Tools and Demonstration.

Figure 2: Structure of the AIM² project

3.1 WP1 – Management

WP1 was the overall management work package, covering all topics relevant to coordinating the project, organising the general meetings and disseminating the gathered knowledge by creating a communication platform and a website [15]. Furthermore, it was designated to take care of gender issues and financial statements.

3.2 WP2 – Deformation Measurements on Wings and Control Surfaces

WP2 was the work package for the improvement of the IPCT and of marker-based optical deformation techniques for measuring wing and control surface deformations in flight. Flight tests were performed on a Fairchild Metro II and an EVEKTOR VUT 100 Cobra. Furthermore, data from ground vibration testing by means of a marker technique on an Airbus A340 were evaluated.

3.3 WP3 – Deformation Measurements on Propeller Blades

WP3 was intended for the improvement of the IPCT and of marker-based techniques for propeller deformation measurements. A rotating stereo camera was designed, built and successfully flight tested on the EVEKTOR VUT100 Cobra [16].

3.4 WP4 – Surface Flow Measurements

FBG and unsteady IRT were further developed within WP4. The FBG sensors were flight tested on a Scottish Aviation Bulldog [17], including several wind-tunnel and laboratory tests. IRT flight tests were performed on a PW-6 glider [18].

3.5 WP5 – Flow Field Measurements

WP5 comprised further development to enhance the optical measurement techniques BOS and PIV for in-flight flow field measurements. A second in-flight PIV application was performed on a Dornier Do-228 aircraft [19]. Furthermore, LIDAR was flight tested on a Piaggio P.180 in order to calibrate standard FTI for airspeed measurements [20].

3.6 WP6 – Tools and Demonstration

WP6 comprised the development of useful tools for performing optical measurements and the creation of an application matrix enabling “non-experienced” users to set up their own advanced in-flight measurements. The tools were finally applied for landing gear measurements on a Piaggio P.180 and wing deformation measurements on a PW-6 glider. To spread the gained knowledge within the flight testing community, a dedicated AIM² Advanced Flight Testing Workshop took place in Rzeszów (PL) from 9 to 14 September 2013. Furthermore, a small handbook on the AIM² techniques has been published [21], providing basic information about the methods and their possible applications.

4. The AIM techniques

4.1 BOS – Background Oriented Schlieren

The Background Oriented Schlieren (BOS) method is an image-based density measurement technique. It uses the deviation of light due to refractive index changes in density gradients [22] (e.g. in compressible flow regimes, non-uniform temperature fields, gas mixtures). A randomly patterned background is recorded once without a density gradient (reference image) and later with the density gradient in the line of sight (measurement image). Cross-correlation algorithms identify pattern shifts between both images and thus enable the visualisation of the density gradients, e.g. vortex core location, jet location (see Figure 3) or shock position.

Figure 3: Example of BOS processing - cross-correlation of reference image (a) and measurement image (b), with the exhaust jet between camera and background, results in the visualisation (c) of the exhaust jet
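The displacement evaluation at the heart of BOS is a windowed cross-correlation between the reference and the measurement image. The following minimal sketch (illustrative only, not the AIM processing chain) estimates the integer pattern shift for one interrogation window using an FFT-based cross-correlation:

import numpy as np

def window_shift(ref_win, meas_win):
    """Integer (dy, dx) shift of the pattern in meas_win relative to ref_win."""
    a = ref_win - ref_win.mean()
    b = meas_win - meas_win.mean()
    # Circular cross-correlation via the correlation theorem
    corr = np.fft.ifft2(np.fft.fft2(b) * np.conj(np.fft.fft2(a))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Unwrap FFT indices to signed shifts around zero
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

In a full BOS evaluation, this estimate would be refined to sub-pixel accuracy and repeated over a grid of windows to build the displacement field, and hence the density-gradient visualisation.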
4.2 FBG – Fibre Bragg Gratings

An FBG is a periodic modulation of the refractive index of the core of an optical fibre, created by the application of an interference pattern. Light of a known spectrum is sent through the fibre and partly reflected by the grating (Figure 4). If the shape of the grating changes, e.g. due to strain or a temperature change, the reflected spectrum changes proportionally. Thus the strain on the fibre can be deduced from the change of the reflected spectrum. Within AIM², strain and pressure sensors based on FBG were developed and flight tested [17], providing a much easier and lighter installation compared to strain gauges or pressure taps.

Figure 4: Basic principle of FBG methods
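For reference, the underlying relations are the standard fibre Bragg grating equations (textbook relations, not quoted from the AIM² publications):

\lambda_B = 2\, n_{\mathrm{eff}}\, \Lambda, \qquad \frac{\Delta\lambda_B}{\lambda_B} \approx (1 - p_e)\,\varepsilon + (\alpha + \xi)\,\Delta T

where \lambda_B is the reflected Bragg wavelength, n_{\mathrm{eff}} the effective refractive index of the fibre core, \Lambda the grating period, p_e the effective photo-elastic coefficient, \varepsilon the axial strain, and \alpha and \xi the thermal expansion and thermo-optic coefficients. The measured wavelength shift \Delta\lambda_B therefore yields the strain once the temperature term is compensated.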
4.3 IPCT – Image Pattern Correlation Technique

The IPCT is an optical shape and deformation measurement technique based on the correlation of images of the investigated object, which is painted with an irregular dot pattern [23]. If two cameras in a stereoscopic arrangement are applied, direct 3D measurements of the object’s shape, its movements and its deformations can be performed (see Figure 5). Within AIM and AIM², the IPCT was mainly applied to wing deformation [2] [3] [4], rotor [7] and propeller deformation measurements [5] [16].

Figure 5: Principle of IPCT

4.4 IRT – Infrared Thermography

Infrared Thermography (IRT) is based on the measurement of the infrared radiation emitted from surfaces and allows a global determination and visualisation of the surface temperature distribution with high accuracy. In aerodynamic research (in wind-tunnel and flight tests), thermography is used for investigations of the boundary layer. Due to the jump in the wall shear stress coefficient, and therefore in the heat transfer coefficient, at the laminar-turbulent transition, it allows the detection and visualisation of the transition from laminar to turbulent flow as well as of laminar separations [24] and, in some cases, vortices.

Figure 6: Example of IRT measurements on a glider (left - setup, right - measurement result)

4.5 LIDAR – Light Detection and Ranging

LIDAR is based on the determination of the Doppler shift of a light wave, obtained from a single-frequency laser, that is reflected by natural atmospheric aerosols (Figure 7). The frequency shift is proportional to the air velocity and is detected via an interferometer measuring the beat between the wave backscattered from the aerosols and a reference wave from a local oscillator. The coherent mixing enables the recovery of the backscattered wave phase, which contains the radial velocity information along the laser line of sight. If required, the true airspeed in three axes can be derived from multi-axis sensing, performed using three or more beams or a scanning device. LIDAR is able to deliver the velocity with no in-flight calibration: it is primary information without bias and can thus be used directly to calibrate e.g. FTI [20]. By scanning an area or a volume, complex flow fields (e.g. vortices) can also be analysed.

Figure 7: Principle of LIDAR

4.6 PIV – Particle Image Velocimetry

Particle Image Velocimetry (PIV) is an image-based measurement technique for instantaneous flow velocity fields. Tracer particles in the measured flow are illuminated by two co-planar pulsed laser light sheets. The light backscattered from the particles is imaged by one or more cameras. The cross-correlation of both particle images delivers a displacement vector field directly depicting the flow field topology. With the known time delay between the laser light pulses and the magnification of the recording system, the velocity vector field, and thus the velocity components, can be measured (a minimal sketch of this conversion step is given below, after the PSP section). Figure 8 shows a sketch of the measurement setup for the AIM and AIM² in-flight PIV campaigns and an example result.

Figure 8: Sketch of the AIM and AIM² PIV setup (left) and an example result (right)

4.7 PSP – Pressure Sensitive Paint

PSP is an optical pressure measurement technique based on the photochemical reaction called “oxygen quenching”. In the presence of oxygen molecules, the luminescence intensity of excited dye molecules embedded in the pressure sensitive paint is reduced by energy transfer. As a result, the luminescence intensity and lifetime change with the oxygen concentration, i.e. with the air pressure. The change can be observed using digital cameras. Within AIM, PSP was applied for in-flight pressure measurements on the pylon of an aircraft.

Figure 9: Example of in-flight PSP measurements - raw image of the PSP (right) and extracted pressure values (left)
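Returning to the PIV processing chain sketched above, the final conversion from measured pixel displacements to velocities is essentially a scaling by the imaging magnification and the laser pulse separation. A minimal sketch, with purely illustrative parameter values (not those of the AIM campaigns):

import numpy as np

def displacement_to_velocity(disp_px, pixel_pitch_m=6.5e-6,
                             magnification=0.1, dt_s=50e-6):
    """Convert a (ny, nx, 2) pixel displacement field to velocities in m/s."""
    # One sensor pixel corresponds to pixel_pitch / magnification in the flow plane
    metres_per_pixel = pixel_pitch_m / magnification
    return np.asarray(disp_px) * metres_per_pixel / dt_s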
5. Conclusion

This paper has provided a brief overview of the contents of the past AIM and AIM² projects. It delivered a small preview of the achievements of the projects and should encourage the reader to take a deeper look into the presented developments of advanced in-flight measurement techniques, e.g. by reading the publications in the references. At the beginning of the first AIM project, optical measurement techniques and their abilities were more or less unknown in the flight test community. Within the AIM consortium, the growing close cooperation between research organisations and aircraft industry led to the first demonstrations of the feasibility of applying the optical methods PSP, PIV, IRT, LIDAR, BOS and IPCT to industrial flight testing. In the follow-up project AIM², the techniques were further developed in order to make them much easier to apply in flight tests. Useful tools and an application guide were created during the project, and in addition a new method, the FBG, was introduced. Furthermore, the focus of the AIM² project lay in the dissemination of the knowledge to the flight test community.

Now, after the finalisation of the two projects, this development has to be continued. Although the optical methods are becoming more and more established for in-flight measurements, several further steps have to be taken in order to make them routinely applicable and reliable. Some features of the applied measurement techniques will remain valuable for a wide variety of experimental applications long after the finalisation of the AIM² project. The results of the improvement of the measurement techniques were mainly assessed by the industrial partners; the exploitation therefore took place mainly in an interaction between the industrial partners and the developing research organisations. To keep this fruitful collaboration alive and to gain more knowledge and fields of application, the AIM and AIM² consortium should be kept together, e.g. in a kind of “AIM community” growing in the future with more and more partners.

6. Acknowledgement

The author acknowledges the valuable work performed by all partners within the AIM and AIM² consortia, as well as by all supporters of the new optical measurement techniques.

7. References

[1] http://aim.dlr.de
[2] H. P. J. Veerman, H. Kannemans, H. W. Jentink: “Highly Accurate Aircraft In-Flight Wing Deformation Measurements Based on Image Correlation”, in: Research Topics in Aerospace: Advanced In-Flight Measurement Techniques, Springer, 2013.
[3] T. Wolf, C. Lanari, A. Torres, F. Boden: “IPCT Ground Vibration Measurements on a Small Aircraft”, in: Research Topics in Aerospace: Advanced In-Flight Measurement Techniques, Springer, 2013.
[4] C. Lanari, A. Torres, T. Weikert, F. Boden: “In-flight IPCT Wing Deformation Measurements on a Small Aircraft”, in: Research Topics in Aerospace: Advanced In-Flight Measurement Techniques, Springer, 2013.
[5] C. Lanari, B. Stasicki, F. Boden, A. Torres: “Image Based Propeller Deformation Measurements on the Piaggio P.180”, in: Research Topics in Aerospace: Advanced In-Flight Measurement Techniques, Springer, 2013.
[6] P. Ruzicka, J. Rydel, M. Josefik, F. Boden, C. Lanari: “Assessment of Propeller Deformation Measurement Techniques for Industrial Application”, in: Research Topics in Aerospace: Advanced In-Flight Measurement Techniques, Springer, 2013.
[7] F. Boden, C. Maucher: “Blade Deformation Measurements with IPCT on an EC 135 Helicopter Rotor”, in: Research Topics in Aerospace: Advanced In-Flight Measurement Techniques, Springer, 2013.
[8] B. Augere, C. Besson, A. Dolfi, D. Fleury, D. Goular, M. Valla: “1.5 µm LIDAR for Helicopter Blade Tip Vortex Detection”, in: Research Topics in Aerospace: Advanced In-Flight Measurement Techniques, Springer, 2013.
[9] K. Kindler, K. Mulleners, M. Raffel: “Towards In-flight Measurements of Helicopter Blade Tip Vortices”, in: Research Topics in Aerospace: Advanced In-Flight Measurement Techniques, Springer, 2013.
[10] Y. Egami, C. Klein, U. Henne, K. de Groot, J. B. Meyer, C.-P. Krückeberg, F. Boden: “In-flight Application of Pressure Sensitive Paint”, in: Research Topics in Aerospace: Advanced In-Flight Measurement Techniques, Springer, 2013.
[11] L. Girard: “Application of Infrared Technology to Helicopter Flight Testing”, in: Research Topics in Aerospace: Advanced In-Flight Measurement Techniques, Springer, 2013.
[12] C. Politz, R. Geisler, S. Ranasinghe: “Ground Based Large Scale Wake Vortex Investigations by Means of Particle Image Velocimetry: A Feasibility Study”, in: Research Topics in Aerospace: Advanced In-Flight Measurement Techniques, Springer, 2013.
[13] C. Politz, N. J. Lawson, R. Konrath, J. Agocs, A. Schröder: “Development of Particle Image Velocimetry for In-flight Flow Measurement”, in: Research Topics in Aerospace: Advanced In-Flight Measurement Techniques, Springer, 2013.
[14] F. Boden, H. Jentink, C. Petit: “IPCT Wing Deformation Measurements on a Large Transport Aircraft”, in: Research Topics in Aerospace: Advanced In-Flight Measurement Techniques, Springer, 2013.
[15] http://aim2.dlr.de
[16] F. Boden, B. Stasicki, M. Szypula: “Rotating Camera System for Propeller and Rotor Blade Deformation Measurements”, European Test and Telemetry Conference 2015, Toulouse, France, 9-11 June 2015.
[17] N. Lawson, R. Goncalves Correia, R. Tatam, S. James, J. Gautrey: “Development of Fibre Optic Strain and Pressure Instruments for Flight Test on an Aerobatic Light Aircraft”, European Test and Telemetry Conference 2015, Toulouse, France, 9-11 June 2015.
[18] P. Rzucidło, G. Kopecki, A. Kucaba-Piętal, R. Smusz, M. Szewczyk, M. Szumski, K. de Groot: “Flight parameters measurement system for PW-6 in flight boundary layer mapping”, 9th AIRTEC International Congress, Frankfurt/Main, 28-30 October 2014.
[19] C. Politz, C. Roloff, F. Philipp, H. Ehlers, A. Schröder, R. Geisler: “Free flight boundary layer investigations by means of Particle Image Velocimetry”, 17th International Symposium on Applications of Laser Techniques to Fluid Mechanics, Lisbon, Portugal, 7-10 July 2014.
[20] C. Besson, B. Augere, A. Dolfi-Bouteyre, W. Renard, G. Canat: “Recent achievements in Doppler lidars for aircraft certification”, European Test and Telemetry Conference 2015, Toulouse, France, 9-11 June 2015.
[21] F. Boden (ed.): “AIM² Advanced Flight Testing Workshop - Handbook of Advanced In-Flight Measurement Techniques”, BoD - Books on Demand, Norderstedt, 2013.
[22] T. Kirmse: “Background Oriented Schlieren (BOS)”, in: [21].
[23] F. Boden, T. Kirmse, H. Jentink: “Image Pattern Correlation Technique (IPCT)”, in: [21].
[24] K. de Groot: “Infrared Thermography (IRT)”, in: [21].

8. Glossary

3D: three-dimensional
AIM: Advanced In-flight Measurement Techniques (EU project)
AIM²: follow-up project of AIM (see above)
BOS: Background Oriented Schlieren
EC: European Commission
FBG: Fibre Bragg Grating
FP6, FP7: 6th and 7th European Research Framework Programmes
FTI: Flight Test Instrumentation
IPCT: Image Pattern Correlation Technique
IRT: Infrared Thermography
LIDAR: Light Detection and Ranging
PIV: Particle Image Velocimetry
PSP: Pressure Sensitive Paint
SME: Small and Medium Enterprise
STReP: Specific Targeted Research Project
WP: Work Package


Recalibration of a Stereoscopic Camera System for In-Flight Wing Deformation Measurements

T. Kirmse
DLR Göttingen, Institute of Aerodynamics and Flow Technology, Bunsenstr. 10, 37073 Göttingen

Abstract: A decalibration of a stereoscopic camera system by slight changes in camera position and alignment directly affects the accuracy of the results. This paper describes criteria to assess the decalibration level of a stereo camera system. An approach for a recalibration, and its limits, are demonstrated for a dynamic wing deformation measurement performed on an Evektor VUT100 Cobra aeroplane by means of the Image Pattern Correlation Technique (IPCT).

Keywords: stereo photogrammetry, digital image correlation, wing deformation measurement

1. Introduction

The applicability of wing deformation measurement techniques based on stereoscopic photogrammetry, such as the Image Pattern Correlation Technique (IPCT), has already been successfully demonstrated for in-flight applications [1, 2]. IPCT combines the principles of stereoscopic photogrammetry with image correlation methods to determine corresponding areas in a stereo image pair, and delivers the 3D wing surface as a result.

Especially during dynamic flight manoeuvres, varying multidirectional loads and the vibration level can affect the camera installation and induce slight changes in camera position and alignment. Even an appropriate support and camera fixation often cannot avoid decalibration completely, due to other constraints of the flight test integration such as available space and maximum weight. Especially for applications on large transport aircraft, the distance between the stereo cameras can amount to 2 m and more to ensure sufficient accuracy; with increasing camera base distance, the effort for installing the stereo cameras on a common stiff support increases significantly. The camera movements induce a decalibration of the measurement system and directly affect the accuracy of the results. Thus criteria must be found to assess the decalibration level of a stereo camera system, in order to decide whether a correction is necessary to obtain reliable results.

The triangulation error, as a measure for the quality of the results, can be caused by different sources. Because IPCT delivers field data of a complete surface, the distribution of the triangulation error can be analysed to infer possible error sources. This paper describes the influence of different error sources on the triangulation error distribution. An approach for a recalibration to correct camera movements, and its limitations, are demonstrated for a dynamic wing deformation measurement performed on an Evektor VUT100 Cobra airplane by means of IPCT.

2. Image Pattern Correlation Technique

Stereo IPCT combines the principles of stereo photogrammetry with digital image correlation methods. The image correlation is used to determine point correspondences in the stereo views of the cameras. For this purpose, structures of randomly distributed dots with a diameter of about 2-3 pixels on the image sensor are applied to the surface to be measured. The calibration of the cameras and the triangulation of the point correspondences to obtain the 3D coordinates of the surface conform to the methods of standard stereo photogrammetry. The determination of the point correspondences is based on the images only and is independent of the calibration parameters.
In the literature, this measurement principle is also called Digital Image Correlation (DIC) [3], but in the field of aeronautics it is mostly called IPCT.

2.1 Camera calibration

The camera model used is based on a pinhole camera, extended by parameters to include radial distortions. The calibration procedure is based on the work of Tsai [4] and Zhang [5]. The internal camera parameters describe the optics of the camera. They include the camera constant f, which is close to the focal length of the camera lens, the pixel width, the pixel aspect ratio, the shear factor s, the radial distortion parameters of first and second order κ1 and κ2, and finally the pixel coordinates of the principal point [u0, v0]. The external camera parameters describe the position and the alignment of the camera as the relation between the camera coordinates [xc yc zc]^T and the world coordinates [xw yw zw]^T, given by a rotation matrix R and a translation vector t:

[xc yc zc]^T = R · [xw yw zw]^T + t    [1]

The projection centre of the camera defines the origin of the camera coordinate system, whose z-axis is perpendicular to the image plane.

2.2 Triangulation and triangulation error

Applying the calibration, the line of sight corresponding to every pixel position of a camera image can be determined by defining a directional vector starting at the projection centre of the specific camera. The intersection of the lines of sight of both cameras for corresponding points delivers the 3D position of the point to be measured. In reality, the lines of sight are skew lines with no intersection; the point location is then estimated at the position of the shortest distance between the skew lines. The triangulation error can be expressed as the real distance between the skewed lines of sight, or as the disparity between the detected pixel coordinate of a point and the re-projection of its measured 3D coordinate onto the camera sensors.

The triangulation error is a measure for the quality of the measurement. It can be caused by:

1) calibration errors (limits of the camera model, detection of the calibration grid, accuracy of the calibration target);
2) errors of the point correspondences (accuracy of the cross-correlation or marker detection algorithm);
3) decalibration caused by camera movements;
4) refractive index changes along the line of sight.

The contribution of the different error sources to the absolute value of the triangulation error cannot be assigned unambiguously. Nevertheless, the distribution of the triangulation error over the complete evaluated surface can be used to assess the dominating error source. For a good calibration, the triangulation errors should be below 0.5 pixels, which can be checked by evaluating a reference surface recorded during the calibration process. Errors of the point correspondences cause local peaks in the triangulation error field; due to the image correlation, such outliers are often located at the edge of the evaluation area or at positions where the correlation pattern is disturbed, e.g. by markers. A decalibration caused by a camera movement introduces a continuous offset of the triangulation error of some pixels with a small variance; its variation is mainly a function of the distance of the 3D position to the cameras and of the distance of the pixel position to the centre of the camera sensors. Depending on the specific application and their source, refractive index changes can cause local errors, e.g. when some lines of sight pass through a shock, or global errors, e.g. if the deformed shape of a window changes the optics of the imaging system and hence the internal calibration parameters of the cameras.
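As an illustration of the triangulation step just described, the following minimal sketch (an illustrative implementation, not the DLR code) estimates the 3D point as the midpoint of the common perpendicular between the two lines of sight and returns the skew-line distance used as the triangulation error:

import numpy as np

def triangulate(c1, d1, c2, d2):
    """c1, c2: projection centres; d1, d2: line-of-sight direction vectors."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    n = np.cross(d1, d2)              # direction of the common perpendicular
    nn = np.dot(n, n)                 # ~0 for (nearly) parallel lines of sight
    w = c2 - c1
    t1 = np.linalg.det(np.stack([w, d2, n])) / nn
    t2 = np.linalg.det(np.stack([w, d1, n])) / nn
    p1, p2 = c1 + t1 * d1, c2 + t2 * d2   # closest points on each line
    return 0.5 * (p1 + p2), np.linalg.norm(p1 - p2)

For nearly parallel lines of sight, nn approaches zero and the estimate becomes ill-conditioned, which mirrors the loss of accuracy at small stereo angles discussed later.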
2.3 Recalibration of the external camera parameters

If the decalibration of the stereoscopic camera system is caused purely by a camera movement and the internal parameters are not affected, the system can be recalibrated by correcting the external calibration parameters. A rotation matrix R_recal and a translation vector t_recal are searched for which the sum of the triangulation errors of all corresponding points becomes minimal. For this purpose the correction terms R_c and t_c are introduced, with

R_recal = R_orig · R_c    [2]

t_recal = t_orig + t_c    [3]

Because the rotation matrix R is an orthogonal matrix defined by three parameters, e.g. the Euler angles, six parameters have to be optimised for a single camera.

For laboratory tests, or tests on the ground in general, there is the possibility to use reference markers in the background at defined positions in the world coordinate system, in order to determine the absolute correction of the position and alignment of both cameras. For in-flight applications, it cannot be clearly distinguished whether a movement of the pattern seen by a camera is caused by a movement of the camera itself, by a deformation of the observed object, or by a combination of both. Thus the external parameters of one camera can only be corrected with respect to the second camera. It is therefore assumed that only one camera must be corrected, which reduces the overall number of parameters to be optimised.

Using only a number of well-distributed corresponding points, but without knowledge of their 3D positions, is not sufficient to solve the problem explicitly, because no information about the scale of the measured area is included. Thus additional constraints must be set to obtain realistic results and to improve the convergence of the minimisation problem. At least two dedicated markers are used to fix the scale of the measurement object: the change of their distance with respect to each other is strictly limited, where the reference distance is taken from a measurement with a valid calibration, ideally obtained from recordings taken during the calibration procedure. Further constraints are the limitation of the change of the camera base (the distance between the projection centres) and a maximum angular change of the optical axis of the camera to be recalibrated.

The computation time of the recalibration depends strongly on the number of corresponding points used as input. Because IPCT often delivers thousands of data points for a measured surface, it is necessary to reduce this number. The nodes used are selected randomly, with a minimum distance between the single nodes ensuring a uniform distribution. Additionally, the triangulation error of a selected point must be within the limits of the mean triangulation error ± its standard deviation over the complete field of view. This requirement ensures that no outliers are selected, which could be caused e.g. by a locally incorrect correlation result.
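A compact sketch of this minimisation, assuming the triangulate() helper above and hypothetical ray lists, might look as follows; the scale, base-distance and view-angle constraints described in the text are omitted for brevity:

import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def total_error(params, c1, rays1, c2_orig, R2_orig, rays2_cam):
    """Summed skew-line distance after applying the candidate correction:
    params[:3] are Euler angles of R_c, params[3:] the translation shift."""
    R2 = R2_orig @ Rotation.from_euler("xyz", params[:3]).as_matrix()
    c2 = c2_orig + params[3:]
    return sum(triangulate(c1, d1, c2, R2 @ d2)[1]   # rotate camera-frame ray
               for d1, d2 in zip(rays1, rays2_cam))

# result = minimize(total_error, np.zeros(6),
#                   args=(c1, rays1, c2_orig, R2_orig, rays2_cam),
#                   method="Nelder-Mead")

In the paper, the constraints are additionally enforced, e.g. via a constrained optimiser or penalty terms.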
3. IPCT measurements on the VUT100 Cobra

The IPCT was applied to measure the wing deformation of a VUT100 Cobra, a four-seat single-engine motor aircraft of 10.2 m span manufactured by Evektor. The two high-speed cameras of type AOS S-EM, with a maximum frame rate of 500 fps at a full resolution of 1280 x 1024 pixels, were installed on a customised camera support behind the front seats of the aircraft (Figure 1). C-mount adapters were used to fix the objective lenses, with a nominal focal length of 35 mm, to the cameras. The random dot pattern was designed by means of a digital mock-up (DMU) to obtain an optimal dot diameter of 2-3 pixels in the camera images over the complete field of view. Accounting for the decrease of the viewing angle towards the wing tip, the dots had to be elongated with increasing spanwise position. Additional checkerboard-like markers were applied as reference points and as initial points for the image correlation. Figure 2 shows a sample stereo image pair recorded on the ground.

Figure 1: IPCT cameras installed in the cabin of the Cobra VUT100

Figure 2: Stereo image pair recorded during the ground test

3.1 Accuracy of the setup

Based on the specific calibration parameter set, the local accuracy can be calculated for a defined uncertainty of the point correspondences; the accuracy estimation assumes a correct calibration. The left plot of Figure 3 depicts the estimated local accuracy ey in the vertical direction for the Cobra measurement, based on a surface determined from a ground recording taken during the calibration procedure. An uncertainty of 0.2 pixels for the point correspondences was used for the accuracy estimation. Due to the small stereo angle, the accuracy ez in the spanwise direction has higher values and increases with increasing distance to the cameras, from 0.8 mm to 4 mm at the wing tip.

The local triangulation error of the pixel coordinates is shown in the right plot of Figure 3. For most of the measurement area, its level is below 0.5 pixels, which is a typical range for a good calibration. The mean triangulation error amounts to 0.22 pixels over the complete surface, with a standard deviation of 0.3 pixels. The structures seen in the contour plot could be caused by a deviation between the camera model and the properties of the real cameras.

Figure 3: Accuracy estimation ey (left) and local triangulation error of the wing surface recorded during the calibration (right)
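The quoted behaviour of the spanwise accuracy follows the usual first-order depth-error estimate for a stereo setup (a generic textbook relation, not the exact model used in this work):

e_z \approx \frac{z^{2}}{f\,B}\, e_{px}

where z is the distance to the surface, B the camera base, f the camera constant expressed in pixels, and e_px the matching uncertainty (0.2 pixels above): the error grows quadratically with distance and shrinks with a larger base, i.e. a larger stereo angle.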
3.2 Ground test results

A deformation measurement was performed on the ground to verify the accuracy of the IPCT results by comparison with simple ruler measurements at some dedicated points. Figure 4 shows a picture of the setup: three rulers were fixed on the bottom side of the wing along the main spar, and a fourth ruler was fixed at the rear auxiliary spar at the wing tip. Different loads were attained by different tank levels (empty, half filled, full) and by the application of an additional weight of 40 kg near the wing tip for the maximum load case.

Figure 4: Setup of the wing deformation measurement on the ground

The load case with an empty tank was used as the reference for zero deformation. The IPCT delivers the surface of the wing as a direct result. For comparison with the ruler measurements by Evektor, the surface coordinates of the IPCT results were extracted along the main spar for every load case. The y-coordinates of the reference were subtracted from the values of the further load cases to obtain the deformation in the y-direction. Figure 5 depicts the deformation versus the spanwise position. The IPCT results agree very well with the ruler measurements.

Figure 5: Wing deformation with respect to the empty tank load case along the main spar

Additionally, the deformation extracted from the IPCT data of the ground recording taken during the calibration procedure was evaluated in the same way. Its results are plotted as a black dash-dotted line in Figure 5 and differ significantly from the other measurements: a positive deformation is detected, meaning an upward wing movement. During the calibration, the tank level was between half-filled and full; even a strong gust could not explain an upward bend of this magnitude. However, several hours passed between the recordings of the ground test and the calibration images, and the aircraft was moved in the meantime. The comparison of the triangulation error extracted at the main spar position in Figure 6 indicates a decalibration of the camera system: the shape of the triangulation error along the main spar for the ground test cases agrees basically with the shape for the surface recorded during the calibration, but there is an offset of about 1 pixel. This is also confirmed by the mean triangulation errors and standard deviations of the complete measured surface listed in Table 1: there is a large offset of the mean triangulation error, whereas the change of its standard deviation compared to the calibration surface is marginal.

Table 1: Triangulation errors of the ground test (in pixels)

                      calib   empty   half full   full   full + 40 kg
  Mean value          0.22    1.22    1.41        1.27   1.40
  Standard deviation  0.30    0.34    0.34        0.32   0.37

The deformation values are a relative measure with respect to a specific reference state. There was no further remarkable decalibration between the recording of the empty-tank reference and the other wing load cases of the ground test. But using a reference with a different decalibration level leads to errors of the relative measure as well, because a change of the camera positions and alignment causes a change of the absolute frame of reference. This must be taken into account for the evaluation of flight test points, where the time lag between the ground reference recording and the in-flight measurement is large, with many alternations of the load that the camera installation has to withstand.

Figure 6: Triangulation error of measured surface coordinates along the main spar

3.3 Flight test results

The IPCT measurements were conducted at several static and dynamic flight conditions. For each flight test point, 720 image pairs were recorded at a frame rate of 120 Hz. Here, only the results of a parabolic flight are presented, with a variation of the load factor between -0.5 g and 2 g over the measurement sequence. The triangulation error of the IPCT results is very high and varies strongly, from 16 pixels down to 6 pixels. Figure 7a shows the development of the load factor Nz. The time series of the triangulation error and of the y-coordinate of a selected point near the wing tip, triangulated with the original calibration parameters of the setup, is shown in Figure 7b. A clear link between the triangulation error and the y-position is observable within the first 250 frames. Additionally, the time series in Figure 7c shows the mean value and the standard deviation of the triangulation error for the complete surfaces evaluated by IPCT. The curve of the mean triangulation error corresponds to the curve of the single point in Figure 7b, apart from an outlier at frame 121 for the single point, which is levelled off by the averaging.
The peaks of the standard deviation curve at some lower frame numbers indicate a higher number of outliers in these surface results. Nevertheless, the standard deviation stays below 0.5 pixels for most frames, suggesting that the triangulation error is caused mainly by a decalibration of the cameras. With a triangulation error of 6 pixels and more, the decalibration cannot be neglected any more.

Figure 7: Time series of parabolic flight

4. Recalibration of the flight test data

The recalibration procedure described in chapter 2.3 was applied to the measurement sequence of the parabolic flight. The external camera parameters of the left camera were corrected by a minimisation of the triangulation errors; in this case the distance between the skew lines of sight of the corresponding points was used as the measure of the triangulation error. The change of the distance between the cameras, ΔB, was limited to 18 mm and the change of the view angle was limited to 0.5°. Furthermore, the allowable change of the marker distance between four markers close to the wing root was restricted to 0.35 mm; their positions measured during the calibration delivered the reference distances for correct calibration parameters. To reduce the computation time, 130 well distributed nodes were selected from the thousands of point correspondences. The parallelised recalibration process took 1.32 CPU hours on an Intel Core i7-3770 processor for the complete sequence of 720 frames; a sketch of such a constrained minimisation is given below.
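A hedged sketch of such a constrained minimisation: the external parameters of one camera (translation and small rotations) are varied within bounds mirroring the limits quoted above (18 mm on the base distance, 0.5° on the angles), with the mean skew-line distance as the cost. The parameterisation, the synthetic scene and the omission of the marker-distance constraint are all simplifying assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def skew_line_distance(p1, d1, p2, d2):
    """Shortest distance between the two lines of sight (see section 3.3)."""
    n = np.cross(d1, d2)
    return abs(np.dot(p2 - p1, n)) / np.linalg.norm(n)

def rot_small(rx, ry, rz):
    """First-order small-angle rotation matrix (adequate below 0.5 deg)."""
    return np.array([[1.0, -rz, ry], [rz, 1.0, -rx], [-ry, rx, 1.0]])

def cost(params, rays_left, rays_right, pos_left0, pos_right):
    """Mean triangulation error after shifting/rotating the left camera."""
    dt, angles = params[:3], params[3:]
    R = rot_small(*angles)
    return np.mean([skew_line_distance(pos_left0 + dt, R @ dl, pos_right, dr)
                    for dl, dr in zip(rays_left, rays_right)])

rng = np.random.default_rng(0)
pos_left0, pos_right = np.zeros(3), np.array([0.2, 0.0, 0.0])
points = np.column_stack([rng.uniform(-0.5, 0.5, 20),
                          rng.uniform(-0.2, 0.2, 20),
                          rng.uniform(1.8, 2.2, 20)])      # scene about 2 m away
rays_right = [p - pos_right for p in points]
rays_left = [rot_small(0.0, 0.0, np.radians(0.2)) @ p for p in points]  # drifted camera

# Bounds mirroring the paper's limits; the marker-distance constraint is omitted.
bounds = [(-0.018, 0.018)] * 3 + [(-np.radians(0.5), np.radians(0.5))] * 3
res = minimize(cost, np.zeros(6), bounds=bounds, method="L-BFGS-B",
               args=(rays_left, rays_right, pos_left0, pos_right))
before = cost(np.zeros(6), rays_left, rays_right, pos_left0, pos_right)
print(f"mean error before: {before*1e3:.2f} mm, after: {res.fun*1e3:.3f} mm")
```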
Figure 8 shows the results of the recalibration by means of the time series of several measures. The plot of the residuum of the cost function in figure 8c demonstrates that the minimisation was successful for most frames. This is confirmed by the mean triangulation error and its standard deviation depicted in figure 8d: for most frames the mean triangulation error is now below 0.2 pixels, and even the standard deviation could be decreased compared to the original results seen in figure 7c. The recalibration failed for only 7 frames, which can be clearly identified by the peaks in figure 8c; this is less than 1% of the data points.

In the upper time series of figure 8a, the absolute y-coordinate of a selected surface point is shown for the original calibration (red) and the recalibration (black). The difference between the absolute values amounts to up to 7 mm. The recalibration cannot reproduce the real absolute coordinate system; hence a reliable absolute deformation with respect to a ground reference of a valid original calibration cannot be derived. However, relative measures can be determined by defining the deformation of a part of the wing in the field of view to be zero. Therefore a rigid body transformation (RBT) was applied to the surface results of the flight test, which maps the six markers close to the wing root to their positions at the ground reference; for the area of these six markers the deformation is defined to be zero. Figure 8b compares the y-coordinate after the application of the RBT. The difference is now decreased, but it nevertheless amounts to up to 2 mm. In figure 8a the maximal differences occur at the beginning of the time series, which is the part with the maximal triangulation error for the original calibration (compare figure 7c), whereas the differences for the mapped coordinate system vanish in this area. Examining the change of the camera position and alignment indicates a different kind of camera movement: in figure 8e the change of the camera base distance ΔB and the angular change Δαcams of the optical axis and of the x-axis of the camera sensor are shown. In the first part of the time series the calibration was corrected by a rotation of the camera around the optical axis, expressed by the higher values of Δαcams for the sensor x-axis. For the frames of failed recalibration, the values of ΔB reached the 18 mm constraint of the minimisation process.

Figure 8: Time series of recalibration results of the parabolic flight

5. Conclusion

The triangulation error is a measure of the quality of the results of a stereoscopic photogrammetry measurement. The distribution of the triangulation error over the complete field of view can be used to assess which kinds of error sources contribute mainly to the triangulation error. IPCT was applied to measure the wing deformation of a VUT100 Cobra aeroplane. The principal applicability of the method was proven in a ground test, where the IPCT results agreed very well with the results of a standard measurement method. A slight decalibration of the cameras was indicated in the IPCT evaluation on the ground but caused no significant error in the results, because the unloaded reference recordings were affected by the decalibration to the same order of magnitude as the loaded measurement recordings. This was not the case for the in-flight measurements, where the mean triangulation error varied very strongly even within a measurement sequence.

The camera system can be recalibrated by correcting the external camera calibration parameters through a minimisation of the triangulation error. But if no fixed reference points are captured in the field of view, because of an overall deformation and a moving background, the alignment of the cameras can only be corrected with respect to each other. The absolute frame of reference cannot be restored, which must be taken into account in the analysis of the results. A recalibration of the external camera parameters of one camera was successfully demonstrated on a wing deformation measurement on the Cobra VUT100 for a parabolic flight manoeuvre. The mean triangulation error could be decreased even below the level of the original valid calibration. The application of the recalibrated camera parameters to the corresponding points changed the positions of the surface points remarkably; the displacements were above the accuracy estimate that assumes a correct calibration. The recalibration can only correct the position and alignment of the cameras with respect to each other, not their absolute position, so that ultimately only relative deformations can be determined. Mapping the coordinates by defining a zone of zero deformation in the field of view reduces the differences between the results using the original calibration and the recalibration, but for the presented case it is not sufficient to reach the estimated accuracy; the recalibration is thus necessary to obtain reliable results. Since the determination of the point correspondences by image correlation is independent of the camera calibration, the recalibration can be implemented easily in the evaluation process.

6. Acknowledgement

The measurement was part of the project 'Advanced In-Flight Measurement Techniques 2' (AIM²) funded by the EC within the 7th Framework Programme for Research of the EU (Contract number 266107).
The author wants to thank the EVEKTOR team for the preparation and conduct of the flight test and the provision of the aircraft data, and NLR for providing and operating the camera system and delivering the raw image data.

7. References

[1] Boden F., Lawson N., Jentink H., Kompenhans J. (eds.): "Advanced In-Flight Measurement Techniques", Springer, 2013.
[2] Meyer R., Kirmse T., Boden F.: "Optical In-Flight Wing Deformation Measurements with the Image Pattern Correlation Technique". In: New Results in Numerical and Experimental Fluid Mechanics IX, Notes on Numerical Fluid Mechanics and Multidisciplinary Design, 124, pp. 545-553, Springer, 2014.
[3] Sutton M., Orteu J.J., Schreier H.: "Image Correlation for Shape, Motion and Deformation Measurements", Springer, 2009.
[4] Tsai R.Y.: "A Versatile Camera Calibration Technique for High-Accuracy 3D Machine Vision Metrology Using Off-the-Shelf TV Cameras and Lenses", IEEE Journal of Robotics and Automation 3 (4), 1987.
[5] Zhang Z.: "Flexible Camera Calibration By Viewing a Plane From Unknown Orientations", International Conference on Computer Vision (ICCV'99), pp. 666-673, September 1999.


N° 3 - In-flight wing deformation measurements by image correlation technique on A350 - Benjamin Mouchet and Vincent Colman - AIRBUS Operation SAS – France

Precise knowledge of the shape of an aircraft wing is a key element for an aircraft manufacturer to validate its models. The classical technique used in flight tests is photogrammetry, based on cameras, flashes and reflective targets set directly on the wing skin. Despite the good results it provides, the targets are aerodynamically intrusive and reduce the flexibility of the flight test campaign. An innovative in-flight wing deformation measurement technique, called the Image Pattern Correlation Technique (IPCT), is a valuable alternative. The technology, developed by DLR, is based on a stereoscopic method, with stickers forming specific patterns on the wing, and complex data post-processing. Partially applied on the A380, this technique was successfully applied at a larger scale on the A350-900 during its certification campaign. The installation, method and results are presented.


Rotating Camera System for Propeller and Rotor Blade Deformation Measurements

F. Boden, B. Stasicki, M. Szypuła
DLR, Bunsenstraße 10, 37073 Göttingen, Germany, fritz.boden@dlr.de

Abstract: Within the EU project AIM² a rotating stereoscopic camera system was designed, built and successfully flight tested in order to apply the non-intrusive Image Pattern Correlation Technique (IPCT) to 360° propeller deformation measurements. The complete system was affixed to the axis of the aircraft engine, rotating together with the propeller at its full rotational speed. It enabled the direct measurement of the propeller blade shape as well as its local pitch angle under real operating conditions. Although the system was exposed to extreme vibration and centrifugal forces, it delivered good images and demonstrated the applicability of the method to flight testing. In this paper, the highly sophisticated rotary camera system and the IPCT measurement technique are described. Some results of the first flight tests are given and the next steps for future applications are introduced.

Keywords: Advanced In-flight Measurement Techniques, IPCT, rotating camera, blade deformation

1. Introduction

The blade deformation directly affects the efficiency of an aircraft propeller, so the measurement of this parameter is of high interest. A standard approach for measuring the bending and torsion of a propeller blade is the application of strain gauges at several single locations on the blade surface. Due to the size of the sensors as well as the required cabling, the number of measurement locations on the rotating propeller is very limited. In addition, the application of the sensors on the blade can negatively affect its mechanical and aerodynamic parameters. Furthermore, a direct measurement of the shape and location of the blade is not possible. The optical measurement method IPCT (Image Pattern Correlation Technique) [1] enables non-intrusive shape and deformation measurements of such structures. Some years ago, this method was applied to propeller deformation in flight [2] and to rotor blade deformation on ground [3]. Both measurements were performed with a non-rotating camera system in the fixed frame, observing the blade passing through the cameras' field of view, and they demonstrated the feasibility of applying IPCT to in-flight propeller and rotor deformation measurements. Other optical measurement techniques, such as 3D DIC [4] and Moiré techniques [5], have also been applied successfully to rotor measurements, but all these activities were performed with cameras outside the rotating frame. Since the blade deformation and movement matter over the whole revolution, it is important to observe the complete rotor disc. For small scales or on test benches [6] this can be done with a stationary camera system of sufficient resolution; for full-scale propeller deformation measurements in flight it is nearly impossible to install such a camera system properly. Therefore, within the EC-funded project AIM² the development of a rotating camera system was launched. In what follows, the IPCT as well as the rotating camera system itself are presented briefly. The successful in-flight application of the system and some results of the image post-processing can be found in the latter part of this paper.
2. Measurement Technique and Required Installations

2.1 IPCT

The stereoscopic Image Pattern Correlation Technique (IPCT) is an optical method based on digital image correlation (DIC) and 3D reconstruction by means of triangulation. At least two cameras observe the patterned measurement object from slightly different viewing angles (see Figure 1). The IPCT processing software dewarps the camera images and identifies corresponding pattern regions in both images. To obtain the 3D surface and its orientation, the resulting camera coordinates are finally triangulated using a 3D camera calibration based on recordings of a well-known calibration target. A comparison of the measured surfaces resulting from different load cases (e.g. non-rotating, ground idle and full thrust) finally delivers the deformation of the investigated propeller or rotor blade.

Figure 1: Sketch of the IPCT processing

In order to separate the deformation from the solid body movements of the measurement area, which occur due to the deformation of the unobserved propeller part and the change of the blade pitch angle, additional markers (e.g. checkerboard markers) are applied to the IPCT pattern. In principle, the obtained 3D surfaces are "stitched together" using markers close to the hub; the remaining differences between the surfaces yield the local deformation. In addition, the markers are used in the processing to obtain an initial image displacement before the dot pattern is correlated. Those markers can furthermore be used to recalibrate the cameras if vibrations of the camera support lead to small misalignments.

Usually, the measurement inaccuracy of IPCT is in the order of 0.2 pixels on the camera sensor. Figure 2 shows the estimation of the inaccuracy of the rotating camera according to [7] for three different focal lengths (f = 8 mm, 12.5 mm and 16 mm). The obtained inaccuracy perpendicular to the blade surface is dz. The inaccuracy increases towards the blade tip and for smaller focal lengths, because the resolution in mm per pixel decreases with increasing distance and decreasing focal length. The estimated inaccuracy of the rotating camera applied to the test (f = 8 mm) is thus in the range of 0.12 to 0.68 mm perpendicular to the propeller blade (a sketch of this estimate is given at the end of this subsection).

Figure 2: Estimation of the inaccuracy dz perpendicular to the blade surface of the presented measurement setup for different focal lengths f

Indeed, the given local inaccuracy is higher than that of conventional methods (e.g. strain gauges resolve down to 0.01‰, compared to 1‰ of the measurement area for IPCT), but the big advantages of IPCT are its non-intrusiveness and its spatial information (field measurement). Furthermore, the shape of the investigated surface and its 3D position are measured directly and do not have to be derived from other parameters (e.g. from the strain and the location of the strain gauges by means of a structural model).
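The trend in figure 2 follows from the standard photogrammetric depth-accuracy relation, dz ≈ z²·δp/(f·B) for disparity uncertainty δp, stereo base B and focal length f (cf. [7]). A hedged sketch, with the base, pixel pitch and distances chosen as illustrative assumptions rather than the actual setup values:

```python
import numpy as np

def dz_perpendicular(z, f, base, pixel_pitch, disparity_uncertainty=0.2):
    """Depth uncertainty dz ≈ z^2 / (f * B) * disparity uncertainty (in metres)."""
    d_m = disparity_uncertainty * pixel_pitch      # pixels -> metres on the sensor
    return z**2 / (f * base) * d_m

z = np.linspace(0.3, 1.0, 8)                       # distances along the blade [m]
for f_mm in (8.0, 12.5, 16.0):                     # focal lengths from figure 2
    dz = dz_perpendicular(z, f_mm * 1e-3, base=0.1, pixel_pitch=5e-6)
    print(f"f = {f_mm:5.1f} mm ->", np.round(dz * 1e3, 2), "mm")
```

As in figure 2, dz grows quadratically towards the blade tip and is largest for the shortest focal length.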
2.2 Rotating Camera System

For the 3D IPCT processing, images of the propeller blade taken by at least two cameras in a stereoscopic arrangement are required. Therefore, within AIM², DLR and the Polish company HARDsoft developed a novel rotating camera system enabling the observation of one propeller blade over its whole revolution during operation. A sketch of this device is shown in Figure 3. It is mounted on the propeller hub (8) and consists of several coaxial stages (1 to 7) co-rotating with the propeller at its full speed.

The camera stage contains two CMOS camera sensors (1, 2) viewing the investigated blade in a stereoscopic arrangement; each sensor has a resolution of 1,280 x 1,024 pixels. The next stage (3) contains the image acquisition board, a GPS module and a phase shifter circuit. The reflection sensor (11) on the hub (8), delivering one pulse per revolution, is connected to this phase shifter to obtain the propeller rpm. With this information, the camera exposure can be triggered at any dedicated propeller phase angle. The phase shift can either be set to a constant value, recording the propeller at the same phase (i.e. the same blade position) for each revolution independent of the propeller speed, or be changed for each revolution by a given increment, hence providing scans of a phase angle range (up to a complete 360° revolution), as sketched at the end of this subsection. In the next stage (4) an embedded computer controls the complete system and stores the images on a removable SSD drive located in stage (5). The self-sustained system is powered by four rechargeable LiFePO4 batteries (6) with a total voltage of 14.8 V and a capacity of 3,500 mAh, enabling operation for approximately one hour with image recording at 45 image pairs per second.

During flight test operation, the system is exposed to severe vibrations (in our application up to 20 g in a range of 20 to 150 Hz) and to significant centrifugal forces at 2,700 rpm. To avoid damage to the electronics due to these high loads, all printed circuit boards were firmly fixed in a rigid metal frame preventing their stretching. To control the system whilst the propeller is running, a WLAN module is included, enabling a "remote desktop" connection with the cabin for setting the recording and camera parameters and taking a quick look at the acquired images.

Figure 3: Sketch of the rotating camera system
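The phase-shifted triggering described above reduces to a simple delay computation from the once-per-revolution pulse. A minimal sketch under assumed timing values (the hardware implementation inside the phase shifter circuit is not published here):

```python
# After each 1/rev pulse, the current period is known from the previous pulse
# interval; delaying the exposure by a fraction of it selects the phase angle.
# In scan mode the phase is advanced by a fixed increment every revolution.

def trigger_delay_s(rev_period_s, phase_deg):
    """Delay after the 1/rev pulse to expose at the given propeller phase angle."""
    return (phase_deg % 360.0) / 360.0 * rev_period_s

rpm = 2700.0
period = 60.0 / rpm                  # ~22.2 ms per revolution
phase = 0.0
for rev in range(5):                 # scan mode: advance phase by 10° per rev
    print(f"rev {rev}: fire after {trigger_delay_s(period, phase)*1e3:.3f} ms")
    phase += 10.0
```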
3. Measurements

Since the propeller deformation measurement task within the AIM² project was performed in collaboration with the Czech aircraft manufacturer EVEKTOR and the Czech propeller manufacturer AVIA PROPELLER, the first measurement object for the rotating camera was the propeller of the Evektor VUT100 Cobra airplane. To apply the IPCT to the propeller, the investigated blade was painted with a dedicated IPCT pattern. To achieve optimal imaging on the camera sensors, the pattern was designed with a progressively increasing dot and marker size towards the blade tip (see Figure 4a). After verification of the pattern design in a digital mock-up (see Figure 4b), a paint mask was manufactured in order to spray the white dots and markers onto the black-primed propeller blade. Figure 4c shows an example image of the painted blade taken by one of the camera sensors: all dots and markers appear with nearly the same size, providing an optimal exploitation of the sensor resolution.

Figure 4: IPCT pattern with checkerboard markers (a - pattern design with progressive dot size, b - simulated view in digital mock-up, c - camera image of the painted blade)

The painted blade and the rotating camera system were then mounted on the propeller, and the complete installation was balanced before being mounted on the airplane. After the installation on the airplane, the camera system had to be calibrated. This was achieved by placing a checkerboard plate as calibration target in the cameras' field of view (see Figure 5).

Figure 5: Calibration of the rotating camera system
Figure 6: EVEKTOR VUT100 Cobra during flight test with rotating camera

From the images recorded during calibration, the IPCT software directly calculates the extrinsic (e.g. position, orientation) and intrinsic (e.g. focal length, distortions) parameters of the cameras. Furthermore, the position and orientation of the calibration target in the first image pair defines the measurement coordinate system of the rotating camera. Once the system had been calibrated and was operating properly, the first flight tests were performed: in total, three measurement flights and one ground run were conducted. During the tests the camera system operated autonomously, with one hour of continuous recording at a maximum frame rate of 45 image pairs per second (equivalent to one image per revolution). Figure 6 shows the Cobra flying with the mounted rotating camera. For these first tests the camera had no aerodynamic cover; to optimize the aerodynamics, a suitable spinner could be used in the future.

Figure 7: Example image pairs recorded at different phase angles during revolution

Figure 7 shows some example recordings of the rotating camera for different propeller phase angles. The blurred background gives an impression of the high rotational speed of around 2,700 rpm. The images provide enough contrast for the IPCT processing even with the massive change of background illumination, and the pattern itself is depicted with sufficient sharpness.

4. Results

After the recording, the IPCT processing of the image data was performed by means of in-house developed software containing three major steps: calibration, marker detection and 3D surface calculation. Figure 8 depicts an example of the 3D surfaces obtained from the stereoscopic image pairs of the rotating camera. The X coordinate corresponds to the blade's span direction, whilst the Y coordinate coincides with the chord of the blade near the root section (in the vicinity of the extracted profile in Figure 8b). The overall span-wise length of the surface is 681 mm, in agreement with the dimensions of the observed area on the blade. The overall shape of the blade surface is well reconstructed; only at the leading edge close to the root, where the pattern is too coarse for the strong curvature, does the reconstruction not work well. The markers are also clearly visible on the surface (e.g. Figure 8a), because no pattern is applied at those positions and the algorithm thus produces a local discontinuity. Figures 8b, c and d show blade profiles extracted from the surface in the chord-wise direction (Y direction) at three different span-wise locations. They clearly show the change of the curvature and of the local pitch angle: strong curvature and zero pitch angle at the root, less curvature and a lower pitch angle at the tip. By performing this chord-wise extraction at the same span-wise locations for different flight conditions and propeller settings, the change of the propeller pitch as well as the change of the blade's twist can be directly measured; a minimal sketch of this extraction follows.
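A hedged sketch of such a chord-wise extraction: fit the slope of a slice for a load case and for the stand-still reference, and take the angle between them as the local pitch change Δθ. The slice data below is synthetic and purely illustrative:

```python
import numpy as np

def pitch_change_deg(y, z_case, z_ref):
    """Δθ between a measured chord-wise slice and the reference slice."""
    slope_case = np.polyfit(y, z_case, 1)[0]
    slope_ref = np.polyfit(y, z_ref, 1)[0]
    return np.degrees(np.arctan(slope_case) - np.arctan(slope_ref))

y = np.linspace(-80.0, 40.0, 60)                 # chordwise positions [mm]
z_ref = 0.10 * y                                 # illustrative reference slice
z_case = 0.13 * y + 2.0                          # illustrative loaded slice
print(f"Δθ ≈ {pitch_change_deg(y, z_case, z_ref):.2f}°")
```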
Figure 8: Example resulting surface (a - detail of the surface; b, c, d - profiles at different span-wise sections)
Figure 9: Examples of chord-wise slices at X = 400 mm (= Figure 8c) extracted from surfaces of different measurement points and normalized to the reference shape at this location; Δθ is the change in pitch angle compared to the reference state

Figure 9 shows some example curves extracted at X = 400 mm (= position of Figure 8c) for different flight measurement points. For a better comparison, all slices were normalized to the reference state at stand-still. The change Δθ of the local pitch angle with respect to the reference state can be read directly out of the diagram by determining the slope of the curves. To obtain a bending line of the blade, the extraction of data can be performed in the span-wise direction for different load cases. Figure 10 shows such an extraction for a line at constant Y = -40 mm, normalized to the reference state at stand-still. The given value of ΔZ is the difference between the Z coordinate measured during flight and the reference surface at stand-still. The maximum deflection that occurred during the test is about 12 mm at the blade tip. As expected, the comparison of the pitch angles and the bending lines for the same flight conditions shows an increase of the bending for higher pitch angle values. Surprisingly, the highest bending value in Figure 10 does not occur for the highest pitch angle in Figure 9.

Figure 10: Example span-wise slices at Y = -40 mm extracted from surfaces of different measurement points and normalized to the reference shape at this location

The reason could be a flow separation at the blade tip for the highest pitch value, as also indicated by the decrease of the bending curve's slope towards the blade tip.

5. Conclusion

The presented rotating camera system in combination with the measurement technique IPCT enabled, for the first time, the observation and direct measurement of a propeller blade's behaviour in flight. Up to now such a detailed measurement was not possible, and the real behaviour of the blade during flight had to be simulated or theoretically calculated. Strain gauges and accelerometers have been the only means to obtain the blade deformation, but merely at a limited number of single locations and with a modification of the blade's surface affecting the aerodynamics and the mechanical properties. For the IPCT measurements with the rotating camera, the only modification to the airplane was the painting of one propeller blade with a dedicated IPCT pattern. The camera itself was directly mounted on the propeller hub of the airplane. After the measurement campaign the camera system and the paint of the propeller blade were removed and the airplane was able to go back to normal service immediately. The preparation of the test, especially the creation of the required dot pattern, was performed using an in-house digital mock-up, thus avoiding costly pre-tests on the airplane. The Evektor VUT100 COBRA was only required for the final flight test, and the effort for the flight test installation was very low. A classical installation of strain gauges for a similar surface measurement would have been nearly impossible due to the required number of sensors, and the fitting on the propeller would have taken much longer.
The presented rotating camera system enables the self-sustained recording of images of the blade at any phase angle, with a maximum frame rate of 45 double frames per second, equivalent to one image pair per revolution of the Cobra propeller. Further development to increase this frame rate is presently being performed by the authors. The IPCT processing of the recorded images delivers continuous 3D surfaces of the investigated blade. These surfaces can be used directly to compare the real shape with shapes estimated from numerical design studies, e.g. to validate such methods or the performance of the blade design. In addition, the obtained in-flight shapes can be used for numerical flow calculations with real geometries. Points or lines can be extracted from the IPCT surfaces to virtually obtain "local sensor data" to be processed in the standard way, as for strain gauges or accelerometers. By using another flange, the rotating camera can also be applied to measurements on other airplanes and, with some modification, on large aircraft propellers. With the lessons learned on the small airplane, where the worst vibrations and centrifugal forces occur, similar devices can now be designed to carry out measurements on rotors of helicopters or wind turbines.

6. Acknowledgement

The authors would like to thank U. Füllekrug (DLR) and T. Korbiel (AGH - University of Science and Technology, Cracow) for their assistance in carrying out the vibration test of the rotating camera, and R. Ollech (DLR) and L. Dorn (DLR) for their cooperation in its mechanical construction and rotation test. Furthermore, they would like to thank P. Růžička (EVEKTOR), Z. Tvrdik (AVIA Propeller) and K. Ludwikowski (HARDsoft) for their valuable work on the camera system and the flight tests.

7. References

[1] Boden, F. (ed.): "AIM² Advanced Flight Testing Workshop - Handbook of Advanced In-Flight Measurement Techniques", BoD Books on Demand, Norderstedt, 2013.
[2] Lanari, C., Stasicki, B., Boden, F., Torres, A.: "Image Based Propeller Deformation Measurements on the Piaggio P180." In: Research Topics in Aerospace - Advanced In-Flight Measurement Techniques, Springer Verlag, Heidelberg New York Dordrecht London, 2013.
[3] Boden, F., Maucher, C.: "Blade Deformation Measurements with IPCT on an EC 135 Helicopter Rotor." In: Research Topics in Aerospace - Advanced In-Flight Measurement Techniques, Springer Verlag, Heidelberg New York Dordrecht London, 2013.
[4] Sicard, J., Sirohi, J.: "Measurement of the deformation of an extremely flexible rotor blade using digital image correlation." Measurement Science and Technology 24(6):065203, 2013.
[5] Fleming, G. A., Gorton, S. A.: "Measurement of Rotorcraft Blade Deformation Using Projection Moiré Interferometry." Shock and Vibration, vol. 7, no. 3, pp. 149-165, 2000. doi:10.1155/2000/342875.
[6] Sirohi, J., Lawson, M. S.: "Measurement of helicopter rotor blade deformation using digital image correlation." Opt. Eng. 51(4):043603, 2012. doi:10.1117/1.OE.51.4.043603.
[7] Kraus, K.: "Photogrammetry: Geometry from Images and Laser Scans", Walter de Gruyter, 2007.
8. Glossary

3D: three dimensional
AIM: Advanced In-flight Measurement Techniques (EU project)
AIM²: follow-up project of AIM (see above)
CMOS: Complementary Metal-Oxide-Semiconductor
DC: Direct Current
DIC: Digital Image Correlation
f: focal length
IPCT: Image Pattern Correlation Technique
LiFePO4: Lithium (Li) Iron (Fe) Phosphate (PO4)
rpm: Revolutions Per Minute
SSD: Solid State Drive
WLAN: Wireless Local Area Network
X, Y, Z: Cartesian coordinates
Δ, d: difference of the parameter
θ: blade pitch angle
φ: phase angle of the propeller


Development of fibre optic strain and pressure instruments for flight test on an aerobatic light aircraft

N.J. Lawson¹, R. Correia², S.W. James², J.E. Gautrey¹ and R.P. Tatam²
1: National Flying Laboratory Centre, Cranfield University, Cranfield, Beds. MK43 0AL, U.K.
2: Engineering Photonics, Cranfield University, Cranfield, Beds. MK43 0AL, U.K.

Abstract: Fibre optic based measurement systems offer advantages for flight testing of aircraft. Through the EU FP7 programme 'Advanced In-Flight Measurement 2' (AIM²), Cranfield University developed and flight tested fibre Bragg grating strain sensors and a Fabry-Perot fibre optic pressure sensor on a certified Bulldog aerobatic light aircraft. Laboratory development demonstrated that the strain sensor had a resolution better than 0.2 µm/m and that the pressure sensor had a resolution better than 0.2 Pa over 400 Pa. Flight tests have proven the sensors' performance at data rates of several kHz during steady state and dynamic manoeuvres (-1g to +4g), including a spin.

Keywords: Fibre Bragg grating, fibre optic Fabry-Perot sensor, flight test instrumentation

1. Introduction

A recent research programme entitled Advanced In-Flight Measurement 2 (AIM²), supported by European Framework 7 (FP7) funding, has allowed a group of 10 partners to develop advanced in-flight measurement techniques based primarily on optical methods. The foundations of AIM² were laid in the FP6 research programme Advanced In-Flight Measurement (AIM), with significant outcomes including the application of pressure sensitive paint (PSP), LIDAR, background oriented Schlieren (BOS), particle image velocimetry (PIV) and the image pattern correlation technique (IPCT) to flight test using a series of fixed wing and rotary wing flight test platforms [1].

The development of a range of different flight test measurement techniques has been ongoing since the 1950s and is well documented in publications such as the AGARD and RTO Flight Test Instrumentation Series AGARDograph 160 (AG 160) [2,3]. In a given flight test programme, it is not uncommon for a comprehensive range of parameters to be measured simultaneously from a flying aircraft, including static and total pressure, temperature and wing shape, including twist. On a large transport aircraft, measurement points can be positioned significant distances (tens of metres) from the sensor power supplies or acquisition boxes, which can present electromagnetic compatibility (EMC) limitations and limit data quality and accuracy. More recent measurement techniques, in particular optical or fibre optic methods, can overcome these limitations and provide improvements in temporal resolution and accuracy [4-8]. More specifically, fibre Bragg grating (FBG) sensors can be used to measure surface strain [4-7], and extrinsic fibre Fabry-Perot interferometer (EFFPI) sensors can be used to measure static pressure [8].

The following paper describes the application of both the FBG and EFFPI fibre optic sensor systems to flight tests using a modified Bulldog aerobatic light aircraft. Over a series of seven flight tests, the sensors were successfully demonstrated through a set of steady and dynamic flight test conditions (-1g to +4g), although for the pressure measurements an offset was found between the optical method and the traditional Kulite sensor. At the time of writing, these discrepancies are still under investigation but are thought to be linked to a calibration-temperature effect or sensor contamination.
2. Development of Bulldog Flight Test Platform

The objective of the Cranfield AIM² research was to develop and flight test fibre optic based surface strain and static pressure sensors applicable to flight test instrumentation on both large scale and light aircraft. As Cranfield University already has extensive experience of the development, design and fabrication of FBG sensors, this method was chosen for the strain measurement system. For the static pressure system, Cranfield University had also completed previous work using FBGs for static pressure measurement [9]; however, this previous work did not demonstrate sufficient resolution for the project described in this paper. Therefore it was decided that a Fabry-Perot based sensor would be developed for the pressure sensing. As far as the authors are aware, this application of a Fabry-Perot pressure sensor to flight test is seminal work.

2.1 Fibre Bragg Grating Surface Strain Sensors

The principle of the FBG sensor is to measure the grating period of a fibre optic core which has been modified with a periodic refractive index grating. The Bragg wavelength, generated by specific reflections inside the fibre core, depends on the period of the grating, so changes in the fibre environment, such as fibre strain and temperature, shift the Bragg wavelength. This wavelength can be monitored using an FBG interrogator and, with a suitable calibration, direct measurements of surface strain can be made (see the sketch at the end of this subsection). The major advantage of the FBG system is that a series of unique grating frequencies can be etched onto a single fibre and simultaneously monitored using a single FBG interrogator. FBG interrogators are available as commercial off-the-shelf items with sample rates of up to 20 kHz. This multi-sensor FBG system can be extended to measuring a distribution of strain points on a surface such as a wing and, using a suitable model, simultaneous surface or wing shape measurement is possible in flight [10].

In this project, Cranfield University fabricated five FBGs in SMF-28 fibre at positions specified for the flight test (see section 2.3). The fibre was hydrogen loaded to increase photosensitivity and the FBGs were etched into the fibre core using a series of five different phase masks and a frequency-quadrupled Nd:YAG pulsed laser operating at 266 nm, generating FBGs with five different Bragg wavelengths. Sections of the polyacrylate fibre buffer jacket were removed before fabrication. All five FBGs had a length of around 4 mm and were not recoated. Following the fibre fabrication, a laboratory strain calibration was completed by mounting a section of the fibre with cyanoacrylate superglue onto a test sample of material equivalent to the aircraft wing skin. This stage of development was required to check the package and mounting method for the fibre before mounting it for the flight test. A conventional 2 mm resistive foil strain gauge (RFSG) was mounted adjacent to the FBG under test and the test sample was loaded up to 600 µε. This calibration gave an FBG repeatability better than 0.29% (1.7 µε) of full scale with high linearity, which compared to 0.41% (2.4 µε) for the conventional strain gauge system. A second identical FBG fibre was then fabricated and mounted onto the aircraft port wing, and a further calibration was completed before flight, also using a set of five RFSGs mounted adjacent to each FBG point. The results from this pre-flight calibration showed similar repeatability to the laboratory test. To simplify the aircraft modification, the wing mounted RFSGs were then disconnected and left glued onto the wing next to the FBGs.
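As a hedged illustration of the principle described above, the standard FBG relation Δλ/λ₀ ≈ (1 − p_e)·ε links the measured wavelength shift to strain once temperature is compensated (here by the sheathed reference grating described in section 2.3). The photoelastic coefficient p_e ≈ 0.22 for silica fibre is a textbook value, and the numbers below are illustrative, not the flight calibration:

```python
def strain_microstrain(lambda_measured_nm, lambda0_nm, p_e=0.22):
    """Strain in µε from the Bragg wavelength shift of one grating."""
    return (lambda_measured_nm - lambda0_nm) / lambda0_nm / (1.0 - p_e) * 1e6

# Illustrative reading: a 0.25 nm shift on a 1550 nm grating -> ~207 µε
print(f"{strain_microstrain(1550.25, 1550.00):.1f} µε")
```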
2.2 Optical Fibre Fabry-Perot Pressure Sensors

The requirement for the fibre optic pressure sensor was to measure steady and unsteady pressure on the aircraft, from a selected point, over the pressure range expected when flying the full g-range and speed range of the aircraft. Prior to development of the pressure sensor, in order to simplify the certification process, a decision was made to modify an existing test plate behind the cockpit (see section 2.3). As significant variations of pressure can occur at different points on the aircraft under different flight conditions, a computational fluid dynamics (CFD) model and a wind tunnel model were developed for the aircraft [11]. This work showed the expected pressure coefficient range at the test point to be -0.05 < Cp < 0.05, which equates to less than 200 Pa of relative pressure change at the sensor point. Therefore a resolution of less than 2 Pa was specified for the sensor.

The resolution requirements indicated that an interferometric type sensor would be required for the fibre optic pressure system, so an extrinsic fibre Fabry-Perot interferometer (EFFPI) sensor was developed for the flight test. EFFPIs consist of an optical cavity formed by the reflection at the distal fibre end mixing with light reflected from a reflective flexible diaphragm mounted over the fibre end. As the pressure at the end of the fibre changes, the diaphragm deforms, changing the optical path length and leading to an interferometric signal [8]. By monitoring the reflected signal, with a suitable calibration, the pressure at the diaphragm can be dynamically measured with high absolute and temporal resolution. A further advantage of the EFFPI method is that, with careful cavity design, an interrogation unit identical to that used for the FBG system can also be used to measure the EFFPI modulated signal.

With reference to Figure 1 below, to fabricate the EFFPI sensor a 125 µm fibre was mounted into a 2.43 mm diameter ceramic fibre optic connector ferrule, and a Mylar microphone sensing membrane was mounted onto the polished top of the ferrule. To ensure an optimum cavity separation between the fibre tip and the membrane, before gluing the fibre, the fibre was moved up and down the ferrule sleeve whilst the output spectrum was monitored over a wavelength range of 27.45 nm; this range corresponded to a cavity length of 387 µm (the fringe-spacing relation is sketched below). The cavity spacing was then set at a point which would produce the broad channelled-spectrum interference fringes required for the FBG interrogator. A 0.5 mm diameter venting tube was also mounted alongside the fibre inside the ferrule to allow the sensor to measure against a reference pressure. The complete sensor was then glued into a 3.5 mm zirconia sleeve for mounting in the aircraft. It must be noted that with more specialist fabrication methods the sensor size could be reduced considerably if the application required it.

Figure 1: Schematic of the EFFPI (2.43 mm ceramic ferrule, Mylar microphone sensing membrane, 10 µm fibre core, 125 µm fibre cladding, 0.5 mm venting tube)
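The relation between the channelled-spectrum fringes and the cavity length can be sketched with the standard low-finesse Fabry-Perot formula, L ≈ λ₁λ₂ / (2(λ₂ − λ₁)) for two adjacent fringe peaks. The wavelengths below are illustrative assumptions, not the recorded spectrum:

```python
def cavity_length_um(lambda1_nm, lambda2_nm):
    """Cavity length in µm from two adjacent interference fringe peaks."""
    return lambda1_nm * lambda2_nm / (2.0 * abs(lambda2_nm - lambda1_nm)) * 1e-3

# Adjacent fringes ~3.1 nm apart near 1550 nm -> a cavity of the order quoted above
print(f"L ≈ {cavity_length_um(1550.0, 1553.11):.0f} µm")   # ~387 µm
```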
To calibrate the EFFPI pressure sensor, the fibre was connected to a wavelength tuneable laser source (Santec HSL 2000) and two optical detectors (New Focus 2011-FC). Through an optical coupler, the fibre output signal was then interrogated using an NI PXI 5152 acquisition card and a PC, as the pressure on the diaphragm was varied through a reference pressure range of up to 400 Pa. The resulting calibration showed a resolution better than 0.33% (1.3 Pa) of full scale over a range of 400 Pa. Further dynamic laboratory tests were also completed using a signal generator and loudspeaker arrangement as an input to the sensor. From this test, the dynamic sensor response appeared acceptable for frequencies up to 10 kHz, which in this case was the limit of the loudspeaker setup. To further validate the EFFPI sensor during the flight test, a conventional Kulite XCQ-093 (2 psi range) unsteady pressure sensor was mounted adjacent to the fibre sensor. This Kulite was statically calibrated using a similar input arrangement to the EFFPI and the results showed acceptable linearity and a resolution better than 0.15% (0.4 Pa) over a working scale of 250 Pa.

2.3 Bulldog Light Aircraft Test Bed

To test the fibre optic strain and pressure sensors, a Scottish Aviation Bulldog aerobatic light aircraft was modified under Certification Standard 23 (CS-23) as a 'minor modification'. This allowed the aircraft to be certified without requiring a flight test programme following the certification and resulted in no limitation to the aircraft's operating speeds or g-load envelope (-3g to +6g). There was also no change to the aircraft's centre of gravity and only a minor increase in the aircraft's mass of around 9 kg, with a maximum take-off weight of 1066 kg. The overall modification consisted of removing the existing floor plate behind the pilot's seat and replacing it with a lightweight honeycomb 0.5 inch thick Teklam plate. A power supply box and a Smartscan Aero fibre optic interrogator box were then mounted onto the plate, and the power supply box was connected to the 28 V aircraft power supply. The Smartscan box was connected to the power supply box and to the FBG and EFFPI fibre optics. A UEI data logging box was also mounted in the power supply box and connected to the Kulite pressure sensor. Further additions to the overall system included an attitude and heading reference system (AHRS), also connected to the UEI data logging cube, and a remote trigger to allow synchronisation of the Smartscan Aero box and the UEI data logger. Finally, additional hand-held equipment, including a PDA and a Druck portable barometer, was carried on board to monitor the cockpit reference pressure throughout the flight.
Figure 2 shows the general arrangement of the set-up inside the aircraft, where in summary the equipment consisted of:

- bespoke power supply box (0.36 kW)
- UEI data logging cube
- trigger box
- SBG Systems IG-500A-G4A2P1-B AHRS (mounted adjacent to the CoG)
- Smartscan Aero fibre optic interrogation box
- fibre optic 1: five wing mounted FBGs
- fibre optic 2: EFFPI pressure sensor (fuselage test plate)
- XCQ-093 Kulite pressure sensor (fuselage test plate)
- PDA connected to a Druck DPI 740 barometer
- on-board cockpit camera and mount

Figure 2: Schematic of the Bulldog flight test instrumentation

The FBG sensors were mounted using cyanoacrylate superglue at five points on the port wing spar centreline, at 200 mm, 400 mm, 1200 mm, 2200 mm and 2600 mm relative to mainplane station 26, which was positioned chordwise 350 mm from the fuselage side. The outermost point was mounted inside a hypodermic sheath to provide temperature compensation for the other four points. The complete fibre length was then protected by covering it with a length of 3M 425-50 speed tape. As indicated in the previous figure, the EFFPI and Kulite sensors were mounted on top of the fuselage on a 161 mm diameter test plate, 35 mm apart, with the circular plate positioned between the cockpit rear bulkhead and the front of the tailplane. The wire and fibre outputs of the two pressure sensors were then fed back along the inside of the fuselage on a common loom and connected to the Smartscan and UEI interrogation units. A single static pressure tube, which terminated in the cockpit, was also connected to both sensors to provide a common cockpit reference pressure. This pressure was monitored by the PDA and the Druck portable barometer throughout the flight. Also throughout the flights, an external pilot's view and an internal view of the main cockpit instruments were monitored using a 50 Hz ActionCam mounted in the roof of the cockpit.

3. Flight Test Results

The flight test programme consisted of seven flights completed in June and July 2014. Flights 1-6 were used to troubleshoot the measurement systems, with issues including data storage loss and Kulite earthing problems. On the initial flight, the tape covering the FBG fibre optic also became detached; a subsequent investigation found that the incorrect tape had been fitted. On flight 7, however, the FBG, EFFPI and Kulite sensors all worked correctly and a series of steady state and dynamic manoeuvres was completed over a 50 minute flight, which included:

- straight and level profile (67 knots IAS, 8400 feet)
- straight and level profile (100 knots IAS, 8400 feet)
- 3 turn left spin (8400 feet - 5500 feet)
- loop (5500 feet, +0.5g to +4g)
- stall turn (5500 feet, 0g to +4g)
- slow left roll (5500 feet, -1g to +1.5g)
- barrel roll (5500 feet, +0.5g to +3g)

In all cases a standard altimeter pressure setting of 1013 millibar was used and sea level conditions were ISA +10°C. In the following results, based on the calibrations, the uncertainties in the data are +/-1.7 µε for the FBG strain, and +/-1.3 Pa and +/-0.4 Pa for the EFFPI and Kulite pressures respectively. AHRS data was also recorded throughout the flights to allow additional assessment of the dynamic manoeuvres. The AHRS data is presented in inertial axes format (ax, ay, az), i.e. relative to the box axes. Pitch, roll and yaw were also analysed using quaternion conversions as specified by the manufacturer. For all data, a common time stamp was achieved on start-up by aligning the different system clocks using a laptop-LAN connection. This alignment was better than 300 ms across the different systems.
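The results in section 3.1 below are quoted as pressure coefficients. A minimal sketch of the conversion from a measured relative pressure to Cp, with the free-stream density and true airspeed as illustrative assumptions:

```python
def pressure_coefficient(dp_pa, rho_kg_m3, tas_m_s):
    """Cp from a differential pressure reading and free-stream conditions:
    Cp = (p - p_inf) / (0.5 * rho * V^2)."""
    q = 0.5 * rho_kg_m3 * tas_m_s**2          # dynamic pressure [Pa]
    return dp_pa / q

# Illustrative values only: 120 Pa relative pressure at 40.6 m/s TAS,
# assuming a density of roughly 0.96 kg/m^3 at the test altitude.
print(f"Cp = {pressure_coefficient(120.0, 0.96, 40.6):.3f}")
```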
3.1 Straight and Level Results

In the first part of flight 7, straight and level results were used to analyse pressure coefficient (Cp) characteristics at two different indicated airspeeds of 67 knots and 100 knots. From previous analysis [12], these two airspeeds correspond to true airspeeds of 40.6 m/s and 60.6 m/s, and to angles of attack of around 8° and 4° respectively.

Table 1: Summary of straight and level flight test results

  Speed (knots)   Cp Kulite   Cp % rms Kulite   Cp EFFPI   Cp % rms EFFPI
  67              0.4131      0.30              0.1315     3.2
  100             0.2949      0.29              0.1103     1.7

To ensure stable values of Cp during the tests, the aircraft was flown with reference to the Druck barometer output, with samples taken when the Druck output was stable to within 10 Pa. This equates to less than 3 feet of deviation in aircraft altitude during a given sampling period. Using these criteria, the best sample sets were analysed and the results are presented in Table 1. The results in Table 1 show a reduction in Cp with increasing airspeed or reducing angle of attack. This result is consistent with the wind tunnel and CFD analysis [11]. However, the magnitude of Cp differs significantly between the Kulite and EFFPI results, with the EFFPI values of Cp around 3 times lower than the Kulite values. This difference equates to around 200 Pa between the sensors, although the EFFPI values of Cp are much closer to the values reported in the CFD and wind tunnel data. This discrepancy is discussed further in section 4.

3.2 Spin and Aerodynamic Manoeuvres

In the second part of flight 7, the dynamic manoeuvres, including the spin, were completed to assess the performance of the fibre optic and Kulite sensors over a wide part of the aircraft flight envelope. During the manoeuvres, AHRS data was also recorded for comparison with the strain and pressure data. Figure 3 shows the dynamic data recorded from all the sensors. An initial study of the data shows that, although the general trends of the EFFPI data and the Kulite unsteady pressure data are similar for all the manoeuvres, discrepancies still exist between the two data sets which are greater than the levels of uncertainty, and there is no constant offset between the sets of dynamic data as was observed during the steady state measurements. The Kulite data is smoother due to the low pass filters applied to the data during post-processing. The variation between the two sets of dynamic data, however, still ranges from zero to around 400 Pa following the exit from the spin, for example.

Figure 3: Sample of flight test data recorded during dynamic manoeuvres

Figure 4 shows a more detailed output of the unsteady FBG strain data recorded during the spin manoeuvre, where the spin rotations, correlating with the AHRS data, can be clearly seen, as well as the recovery stage, where the aircraft load changes from a positive g-load to a negative g-load as a negative angle of attack is used to impart recovery. Further spectral analysis of this temporal data is shown in Figure 5, where the AHRS data predicts a spin frequency of 0.4 Hz, which compares to an FBG spin frequency of 0.39 Hz; a minimal sketch of this spectral estimate follows.
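A hedged sketch of that spectral estimate: take the FFT of the detrended strain series and pick the dominant peak. The synthetic 0.4 Hz signal stands in for the flight data, and the 0.05 Hz bin width matches the spectral uncertainty quoted below:

```python
import numpy as np

fs = 2000.0                                   # assumed sample rate [Hz]
t = np.arange(0, 20.0, 1.0 / fs)              # 20 s spin segment
strain = 50.0 * np.sin(2 * np.pi * 0.4 * t) + np.random.normal(0, 5, t.size)

spectrum = np.abs(np.fft.rfft(strain - strain.mean()))
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)     # 0.05 Hz frequency resolution
print(f"dominant frequency: {freqs[spectrum.argmax()]:.2f} Hz")   # ~0.40 Hz
```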
Here the uncertainty in the AHRS spectral data is +/-0.035 Hz and in the Fabry-Perot and FBG spectral data +/-0.05 Hz. Examination of the on-board camera images and of fixed ground features during the spin also confirmed a frequency of 0.4 Hz.

Figure 4: Strain and AHRS data recorded during the spin manoeuvre
Figure 5: Spectral analysis of FBG and AHRS data during the spin

Further AHRS and FBG data taken from the loop and slow roll manoeuvres are shown in Figure 6 and Figure 7 respectively. Considering the loop, the initial pull-up is visible where, in this case, increases in negative strain correspond to increases in g-load on the wing; this change in g-load is confirmed in the AHRS data. The top of the loop is also visible, where the g-load on the aircraft reduces to around 0.5g, with an increase in load as the pull-out of the loop is completed. In the slow roll data, the sequence of 'entry', 'inversion', 'maintain' and 'recovery' stages of the roll can also be seen, with corresponding changes in the AHRS data. Similar trends were also found in the EFFPI data, as can be seen in Figure 3, although the offset of around 200 Pa between the Kulite and EFFPI values is also visible.

Figure 6: Strain and AHRS data for the loop manoeuvre
Figure 7: Strain and AHRS data for the slow roll manoeuvre

4. Discussion

The previous section has presented steady state straight and level data and dynamic FBG and EFFPI data at sample rates of up to 2 kHz. Over this wide range of flight conditions and g-loads, the fibre sensors, data logger and Smartscan fibre interrogation unit all behaved as expected, and data was successfully recorded and post-processed for the entire duration of the flight. Checks of the equipment on the aircraft following this flight test series, and frequent use of the aircraft for student flight training, have also found the sensors to be stable and robust. There were, however, a number of issues with the results which still need to be resolved. The main issue relates to the discrepancy between the EFFPI sensor and the Kulite pressure sensor. A post-flight calibration of the Kulite did not find any significant variation from the original factory calibration. At the time of writing, it is thought the discrepancy may be related to the temperature variation of the EFFPI sensor or the Kulite sensor, or a combination of both, as the Kulite was operated around 10°C below its temperature compensated range. It may also be related to sensor contamination, or to an aerial and a beacon light protuberance which were positioned adjacent to the pressure sensors. Further studies need to be completed to resolve this difference.

5. Conclusions

This paper has presented flight test data from two fibre optic sensors fitted to a certified aerobatic category Bulldog light aircraft. The FBG system successfully recorded wing surface strain data during dynamic manoeuvres over a normal g-range of -1g to +4g, with sufficient resolution to allow analysis of the temporal features of a spin. The EFFPI pressure sensor, fitted onto the top of the aircraft fuselage, also allowed analysis of both the dynamic and steady manoeuvres, although an offset was found relative to a conventional Kulite pressure sensor fitted adjacent to the EFFPI sensor. Future work aims to isolate the sources of this error and then complete further flight test campaigns with the sensors.
6. Acknowledgement

The authors would like to acknowledge support from European Framework 7 funding, contract number 266107 'Advanced In-Flight Measurement 2', and EPSRC grant number EP/H02252X/1. (For enquiries relating to access to the research data or other materials referred to in this article, please contact Cranfield University Library and Information Services - library@cranfield.ac.uk.)

7. References

[1] Boden F., Lawson N., Jentink H.W., Kompenhans J.: "Advanced In-Flight Measurement Techniques", Springer-Verlag, Berlin (2013).
[2] Kottkamp E., Wilhelm H., Kohl D.: "Strain Gauge Measurements on Aircraft", AGARD and RTO Flight Test Instrumentation Series AGARDograph 160 (AG 160), Volume 7 (1976).
[3] Wuest W.: "Pressure and Flow Measurement", AGARD and RTO Flight Test Instrumentation Series AGARDograph 160 (AG 160), Volume 8 (1980).
[4] Measures R.M.: "Structural Monitoring with Fiber Optic Technology", London, Academic Press (2001).
[5] Betz D., Staudigel L., Trutzel M.N.: "Test of a fiber Bragg grating sensor network for commercial aircraft structures", Optical Fiber Sensors Conference Technical Digest, OFS 2002, 15th, pp. 55-58 (2002).
[6] Lee J-R., Ryu C-Y., Koo B-Y., Kang S-G., Hong C-S., Kim C-G.: "In-flight health monitoring of a subscale wing using a fiber Bragg grating sensor system", Smart Materials and Structures 12, pp. 147-155 (2003).
[7] Read J., Foote P.D.: "Sea and flight trials of optical fibre Bragg grating strain sensing systems", Smart Materials and Structures 10, pp. 1085-1094 (2001).
[8] Rao Y.J.: "Recent progress in fibre-optic extrinsic Fabry-Perot interferometric sensors", Optical Fibre Technology, 12(3), pp. 227-237 (2006).
[9] Chehura E., James S.W., Tatam R.P., Lawson N., Garry K.P.: "Pressure measurements on aircraft wing using phase-shifted fibre Bragg grating sensors", 20th International Conference on Optical Fibre Sensors, Edinburgh, 5th-9th Oct 2009.
[10] Lee J-R., Ryu C-Y., Koo B-Y., Kang S-G., Hong C-S., Kim C-G.: "In-flight health monitoring of a subscale wing using a fiber Bragg grating sensor system", Smart Materials and Structures 12, pp. 147-155 (2003).
[11] Lawson N.J., Gautrey J.E., Salmon N., Garry K.P., Pintiau A.: "Modelling of a Scottish Aviation Bulldog using Reverse Engineering, Wind Tunnel and Numerical Methods", IMechE Part G, Journal of Aerospace Engineering, DOI: 10.1177/0954410014524740 (2014).
[12] Lawson N.J., Salmon N., Gautrey J.E., Bailey R.: "Comparison of Flight Test Data with a Computational Fluid Dynamics Model of a Scottish Aviation Bulldog Aircraft", The Aeronautical Journal 117(1198), pp. 1273-1291 (2013).

8. Glossary

FBG: Fibre Bragg grating
EFFPI: Extrinsic Fibre Fabry-Perot Interferometer
RFSG: Resistive Foil Strain Gauge
AHRS: Attitude and Heading Reference System
CFD: Computational Fluid Dynamics
(n - 1): normal g-load increment
Cp: pressure coefficient
ax: AHRS inertial longitudinal axis
ay: AHRS inertial lateral axis
az: AHRS inertial directional axis


Recent achievements in Doppler Lidars for aircraft certification
C. Besson, B. Augère, G. Canat, A. Dolfi-Bouteyre, D. Fleury, D. Goular, J. Le Gouët, L. Lombard, C. Planchat, M. Valla, W. Renard
ONERA, Chemin de la Hunière, 91321 Palaiseau (France)
Abstract: We report on the performance and field tests of recently developed fiber Doppler Lidars. Two types of systems are considered: a ground-based range-resolved Lidar and an airborne true air speed sensor. These systems can be used for the certification of new air vehicles (fixed wing or rotary wing) and ease their integration into air traffic.
Keywords: flight test instrument, certification, Lidar, fiber laser, coherent detection, true air speed, TAS, wind, turbulence, EDR
1. Introduction
During landing and take-off, a minimum separation distance between aircraft is necessary in order to avoid the risk of a wake vortex encounter from a preceding aircraft. Wake vortices are two coherent counter-rotating flows created behind the aircraft wings, and they induce a potentially dangerous rolling moment on the following aircraft. Atmospheric conditions determine wake vortex lifetime and trajectory: it has been shown that the wake vortex dissipation rate varies with the atmospheric turbulence level, and vortices can also be transported out of the way of oncoming traffic by crosswinds. Other air disturbances, such as wind gusts or rapid changes in the incoming wind direction, are also detrimental to airport traffic flow. Anticipating these phenomena in the vicinity of airports is therefore key information for air traffic optimization and safety. Not only airport safety but also flight tests of new air vehicles, manned or unmanned, could benefit from accurate knowledge of air dynamics disturbances in the vicinity of airports. Wake vortex locations and trajectories, wind turbulence levels and wind maps are ancillary data that can be used during flight test analysis. They can be provided by long-range, range-resolved Doppler Lidar or Radar, which measure the wind speed with high spatial resolution [1]. Such sensors are being evaluated for airport safety and re-categorization purposes in the framework of various projects such as SESAR, CREDOS or FIDELIO [2][3]. Measuring the wind speed at a short distance from the aircraft is also useful for true air speed retrieval during certification procedures. Indeed, the calibration of an aircraft's air data sensors requires cumbersome procedures, including specific equipment and dedicated, costly flight tests. Although several techniques have been developed over the years, the most direct and probably the most accurate way to obtain the correction factors is to compare the aircraft air data measurements with optically derived on-board measurements obtained non-intrusively from the free-stream region in front of the aircraft. Calibrations of the pitot-static system and vanes using a laser anemometer have increased accuracy compared with those obtained with conventional techniques such as a towed cone, tower fly-by or a pacer aircraft. A short-range Lidar allows a precise and remote measurement of air speed just outside the region of flow disturbance from the aircraft: it gives the velocity in real time, with no in-flight calibration, using autonomous on-board equipment and without a priori assumptions on the atmosphere.
In this paper we review recent Lidar achievements at Onera and report on performance and field test results for two types of Lidar: a ground-based range-resolved Lidar and an airborne true air speed sensor. Doppler Lidar (light detection and ranging) is a well-known sensing technique for the retrieval of air speed. The systems described in this paper are based on the determination of the Doppler shift of a light wave, obtained from a pulsed or continuous single-frequency laser, that is reflected off natural atmospheric aerosols (Mie scattering). The aerosols are the wind field tracers to be analysed. The frequency shift is proportional to the air velocity and is detected via an interferometer measuring the beat frequency between the wave backscattered from the aerosols and a reference wave (local oscillator). Coherent mixing enables recovery of the backscattered wave phase, which contains the radial velocity information (along the laser line of sight). It also enhances the detection sensitivity: the optical product of the signal beam with the reference beam amplifies small target signals. Lidars based on fiber technology are well adapted to field or airborne operations thanks to their intrinsically vibration-resistant designs. When emitting around 1.5 µm, they can benefit from telecom industry components, with a large market, competitive costs and increased reliability. They offer simplified maintenance procedures compared to free-space technology and enable compact system designs.
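As a quick numerical illustration of the Doppler relation just described: a minimal sketch, using the standard coherent-Lidar expression f_D = 2·V_r/λ implied above (the helper name is ours; 1545 nm is the master-oscillator wavelength quoted in the set-up of section 2.2):

```python
# Coherent-detection relation: the backscattered wave picks up the
# Doppler shift on both the outward and the return path, hence the
# factor of two in f_D = 2 * V_r / wavelength.
WAVELENGTH_M = 1.545e-6

def radial_velocity_ms(doppler_shift_hz: float) -> float:
    """Radial wind speed (m/s) from the measured Doppler beat frequency."""
    return doppler_shift_hz * WAVELENGTH_M / 2.0

# At 1.545 um, 1 m/s of radial wind produces a ~1.29 MHz beat frequency:
print(f"{radial_velocity_ms(1.294e6):.2f} m/s")  # -> 1.00 m/s
```

This megahertz-scale beat is what the photodetector signal processing estimates in each range gate.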
2. Range-resolved wind Lidar
A long-range, range-resolved coherent scanning wind Lidar can provide radial wind velocities that can be processed into 3D wind maps as well as EDR. For airport safety applications, an extended range up to 10 km, as well as fast large-area coverage with a refresh rate below 10 seconds, is necessary. This is now possible thanks to recent progress in high-power single-frequency all-fiber lasers, a key component of the system. Indeed, such Lidars require narrow-linewidth (a few MHz) pulsed laser sources emitting in the µs regime with kW peak power [4][5]. The development at Onera of new high-power lasers has yielded this class of wind Lidar, field tested in 2014 and 2015.
2.1 High power pulsed fiber laser
Eyesafe, all-fiber laser sources based on the MOPFA (Master Oscillator Power Fiber Amplifier) architecture offer many advantages over bulk sources, such as low sensitivity to vibrations and emission versatility. These sources have very good efficiency and can bear a high thermal load, enabling high-repetition-rate pulsed emission, typically from 10 to 100 kHz. However, the peak power of narrow-linewidth MOPFAs is limited by stimulated Brillouin scattering (SBS), and specific strategies must be deployed to mitigate this effect, which usually limits the output peak power to ~100 W in single-mode fibers. The fiber power handling can be improved without degrading the beam quality by increasing the fundamental mode effective area while maintaining (quasi-)single-mode propagation. For this purpose, various LMA (large mode area) fiber designs have been proposed at 1 µm. For 1.5 µm operation, quasi-single-mode propagation in LMA fibers is more challenging: the required high-index codopants increase the core numerical aperture (NA) and the number of guided modes, thus decreasing the resulting beam quality. For example, erbium-ytterbium (Er-Yb) doped fibers require high-concentration phosphorus codoping, while ytterbium-free, erbium-doped fibers require alumina codoping. Commercial LMA fibers can typically emit up to 300 W. For this reason, various strategies have been proposed to maintain a good beam quality (see [6] to [10]). Microstructured cores using erbium-ytterbium codoped materials have been proposed, reaching 940 W peak power, but they suffer from manufacturing complexity. We have also developed single-frequency all-fiber amplifiers based on Er-Yb doped LMA fibers with optimal composition; pulse energies up to 450 µJ were achieved with excellent beam quality. Another method to mitigate SBS is to apply a strain gradient along the fiber. This translates into an acoustic velocity gradient along the fiber, and thus into an inhomogeneous broadening of the Brillouin gain spectrum. Thanks to this Onera proprietary method, we raised the peak power and energy of the laser source to 600 W and 500 µJ respectively, a gain of more than 3 dB compared to the same fiber source without the strain gradient.
2.2 Long-range range-resolved wind Lidar tests
High-power MOPFA lasers can be integrated in a monostatic coherent Lidar architecture such as the one depicted in Figure 1. Figure 1: MOPFA coherent fiber Lidar set-up. The master oscillator is a laser diode emitting at 1545 nm in continuous-wave regime. Its output is split by a 50/50 fiber coupler. The signal channel (1) goes through an intensity modulator (MI), which shapes the pulse, and is then amplified through a preamplification stage and a booster stage. At the output of the booster stage, the beam is polarized, quasi-single-mode, and its temporal shape is optimized for efficient coherent detection. A passive fiber pigtail is used to connect to the beam splitter. A polarization beam splitter (CSP) circulates the emitted signal channel and the detection channel (2) thanks to a quarter-wave plate. Up to the beam splitter, all components are fibered; beyond that point, however, the Brillouin effect would occur in the fiber, so free-space optics are used. At the telescope output, the signal beam is emitted into the atmospheric boundary layer with a slant angle of a few degrees. The signal backscattered from natural aerosols is coupled into fiber 2 of the beam splitter and mixed with the local oscillator by a fiber coupler. The electrical signal of the photodetector is analysed and processed in real time. The Onera Lidar LICORNE is a convenient tool for testing different Lidar configurations, signal processing schemes, components and lasers. Wind speeds have been measured with various fiber amplifiers: a 100 µJ commercial amplifier and two homemade amplifiers delivering 400 µJ and 600 µJ. For similar optimal meteorological conditions and with the same Lidar parameters (10 kHz repetition rate, 1024 laser pulse accumulation), the typical maximum ranges obtained are 2.5 km, 10 km and 15 km respectively [9][10]. The range-resolved Lidar has a spatial resolution of 150 m and displays complete wind profiles in 100 ms (see Figure 2). Figure 2: Long-range wind Lidar LICORNE (12 Jan 2015): air speed vs range – the colour code is the frequency power spectral density – spatial resolution: 150 m, maximum range: 16 km.
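A back-of-the-envelope consistency check of these figures, using only the numbers quoted above (10 kHz repetition rate, 1024-pulse accumulation, 150 m resolution); the script and its variable names are ours:

```python
C = 3.0e8              # speed of light, m/s
PRF = 10_000           # pulse repetition frequency quoted for LICORNE, Hz
N_ACCUM = 1024         # pulses accumulated per wind profile

# 1024 pulses at 10 kHz -> one averaged profile every ~102 ms,
# consistent with the "complete wind profiles in 100 ms" figure.
profile_period_s = N_ACCUM / PRF
print(f"profile update: {profile_period_s * 1e3:.0f} ms")

# A 150 m range gate corresponds to a ~1 us processing window
# (round trip: delta_R = c * tau / 2).
gate_s = 2 * 150.0 / C
print(f"range-gate duration: {gate_s * 1e6:.0f} us")

# Unambiguous range at a 10 kHz PRF: c / (2 * PRF) = 15 km, the same
# order as the 15-16 km maximum ranges reported above.
print(f"unambiguous range: {C / (2 * PRF) / 1e3:.0f} km")
```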
These two high-power lasers are appropriate for long-range, large-area wind monitoring, and these results are, to the best of our knowledge, the longest ranges achieved with a 1.5 µm all-fiber wind Lidar.
2.3 Long-range scanning Lidar for airport field tests
High-power laser sources enable high-speed data acquisition and therefore efficient scanning Lidar operation. This is needed for large-area monitoring such as airport surveillance. During the UFO European project, Onera assembled a high-power fiber amplifier emitting more than 500 W peak power with excellent beam quality [1]. The laser was integrated in the Leosphere Windcube®, an ultra-fast scanning ground-based Lidar. The system shown in Figure 3 operated continuously for two months at Toulouse-Blagnac airport and enabled the retrieval of relevant quantities for future Weather Dependent Separation (WDS) concepts [11]. Thanks to the new high-power laser, wind speed maps were provided at ranges beyond 10 km, with a 45-degree horizontal scan in no more than 8 seconds. A typical radial wind map is shown in Figure 4. Figure 3: UFO high-power laser installed in the Windcube Lidar. Figure 4: Radial wind map acquired at Blagnac airport.
2.4 EDR retrieval
Atmospheric conditions determine wake vortex lifetime and trajectory. It has been shown that the wake vortex dissipation rate varies with the atmospheric turbulence level, characterized by the eddy dissipation rate (EDR). EDR retrieval from Lidar data remains a relatively new topic, especially for operational purposes such as air traffic applications. Doppler Lidars can provide information about the spatial statistics of the wind field and hence give an estimate of the turbulence, or eddy dissipation rate [12][13][14]. The estimation can be made from:
- the Doppler spectrum width,
- the velocity variance, or
- the velocity structure function.
EDR estimation algorithms, although using different processing techniques, all rely on power spectral representations of turbulence. In this approach, the power spectral density of the velocity fluctuations in the inertial range has a universal shape based on Kolmogorov theory. For a scanning Lidar, the azimuthal structure function method is preferred. The EDR value is then obtained by fitting the 2/3 slope of the structure function $D_v$; the output value is often $\varepsilon^{1/3}$ (in m$^{2/3}$.s$^{-1}$):

$D_v(s) = C_v\,\varepsilon^{2/3}\,s^{2/3}$

where $s$ is the spatial separation, $\varepsilon$ is the energy dissipation rate and $C_v \approx 2$ is the Kolmogorov constant. EDR retrieval has been performed on Lidar data obtained during the UFO trials at Toulouse-Blagnac airport. An example is given below for a set of PPI scans (elevations from 2° to 45°, azimuths 47° to 293°), as a function of time and for different altitudes. Figure 5 shows an example of a velocity structure function fit for measurement points with heights between 150 and 200 m, averaged over 10 min. Figure 5: Azimuthal structure function fit at 150 m (15 Apr 2014, 09:14:47, averaged over 10 min) – EDR^(1/3) = 0.0476 m^(2/3).s^(-1), L0 = 505 m.
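A minimal sketch of this structure-function fit, assuming the azimuthal structure-function estimates D_v(s) have already been computed from the radial velocities. The separations and values below are synthetic, built to reproduce the EDR^(1/3) level quoted in Figure 5 (C_v = 2 as above); the function name is ours:

```python
import numpy as np

C_V = 2.0  # Kolmogorov constant quoted in the text

def edr_from_structure_function(s, d_v):
    """Estimate eps from D_v(s) = C_v * eps**(2/3) * s**(2/3).

    The 2/3 slope is imposed, so only the level eps**(2/3) is fitted
    (least squares on the premultiplied structure function).
    """
    s = np.asarray(s, float)
    d_v = np.asarray(d_v, float)
    level = np.mean(d_v / s ** (2.0 / 3.0)) / C_V  # = eps**(2/3)
    return level ** 1.5                            # eps, m^2/s^3

# Hypothetical separations (m) and structure-function values (m^2/s^2)
# standing in for the azimuthal estimates of Figure 5.
s = np.array([50.0, 100.0, 150.0, 200.0, 300.0])
eps = 0.0476 ** 3                     # EDR^(1/3) value quoted in Figure 5
d_v = C_V * eps ** (2.0 / 3.0) * s ** (2.0 / 3.0)
print(f"EDR^(1/3) = {edr_from_structure_function(s, d_v) ** (1/3):.4f} m^(2/3)/s")
```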
References
CREDOS D2-6 WP2 final report: https://www.eurocontrol.int/sites/default/files/content/documents/sesar/credos-d2-6-wp2-final-report-v11.pdf
[4] J.-P. Cariou et al., "Laser source requirements for coherent LIDARs based on fiber technology", Comptes Rendus Physique, Vol. 7, Issue 2, pp. 213-223 (March 2006).
[5] X. Zhang et al., "Single-frequency polarized eye-safe all-fiber laser with peak power over kilowatt", Applied Physics B, pp. 1-5 (2013).
[6] G. Canat et al., "Multifilament-core fibers for high energy pulse amplification at 1.5 µm with excellent beam quality", Opt. Lett. 33, pp. 2701-2703 (2008).
[7] W. Renard et al., "High peak power single frequency efficient Erbium-Ytterbium doped LMA fiber", Conference on Lasers and Electro-Optics Europe (CLEO Europe 2015), CJ-12.5, 25/06/2015.
[8] G. Canat et al., "Eyesafe high peak power pulsed fiber lasers limited by fiber nonlinearity", Optical Fiber Technology, Vol. 20, No. 6, pp. 678-687, 10.1016/j.yofte.2014.06.010.
[9] W. Renard et al., "Beyond 10 km range wind-speed measurement with a 1.5 µm all-fiber laser source", Conference on Lasers and Electro-Optics (CLEO 2014), San José (USA), 08-13/06/2014.
[10] W. Renard et al., "High peak power single frequency efficient Erbium-Ytterbium doped LMA fiber", Conference on Lasers and Electro-Optics (CLEO 2015), San José (USA).
[11] L. P. Thobois et al., "Wind and EDR Measurements with Scanning Doppler LIDARs for Preparing Future Weather Dependent Separation Concepts", AIAA Technical Conferences (2014).
[12] R. Frehlich et al., "Measurements of boundary layer profiles in an urban environment", Journal of Applied Meteorology, No. 45, pp. 821-837 (2006).
[13] V. A. Banakh et al., "Estimation of turbulent energy dissipation rate from data of pulse Doppler LIDAR", Journal of Atmospheric and Oceanic Optics 10, pp. 957-965 (1997).
[14] R. Frehlich et al., "Coherent Doppler LIDAR measurements of wind field statistics", Boundary-Layer Meteorology 86, pp. 233-256 (1998).
[15] S. M. Spuler et al., "Optical fiber-based laser remote sensor for airborne measurement of wind velocity and turbulence", Applied Optics, Vol. 50, No. 6 (February 2011).
[16] H. Inokuchi et al., "Development of a long range airborne Doppler Lidar", 27th International Congress of the Aeronautical Sciences (2010).
[17] J.-P. Cariou et al., "All-fiber 1.5 µm CW coherent laser anemometer DALHEC. Helicopter flight test analysis", 13th Coherent Laser Radar Conference, Kamakura (2005).
[18] B. Augère et al., "1.5 µm Lidar anemometer for True Air Speed, Angle of Sideslip and Angle of Attack measurements onboard Piaggio P180 aircraft", Measurement Science and Technology, MST-102092.R1 (2015).


New Upstream Rotating Measurement System for gas turbine exhaust gas analysis – NURMSys project
Sylvain LOUME, Bertrand CARRE, David LALANNE
AKIRA Technologies, ZA St Frédéric, Rue de la Galupe, 64100 BAYONNE, France
Abstract: The analysis and measurement of combustion efficiency is a key point for improving the fuel consumption and pollutant emissions of aircraft engines. The thermal constraints of the combustion process make direct measurement impossible. A specific device has been developed to improve on the state of the art in terms of measurement precision and modularity, and to increase the experimental capacity of our partner.
Keywords: combustion efficiency and pollutant emissions measurement, gas turbine, severe measurement environment, integrated measurement device
1. Introduction
In 2001, ACARE (the Advisory Council for Aeronautics Research in Europe) was founded to establish and maintain a technical roadmap [SRA – Strategic Research Agenda] outlining the orientations to be taken to meet society's needs for aviation as a public mode of transport, as well as noise and emissions reduction requirements, in a sustainable way. This roadmap defined particularly ambitious targets for 2020: reductions in CO2 emissions [-50%], NOx emissions [-80%] and noise [-50%]. All the stakeholders of the aeronautics market, and especially the engine manufacturers, are fully focused on research and development programs to reach those targets. Obviously, one key lever for emissions reduction is the improvement of aircraft engines, and especially the optimization of the efficiency of fuel energy conversion, namely the combustion process. To study combustion, and through it combustion efficiency and the resulting pollutant emissions, it is essential to characterize the burnt exhaust gases, measuring chemical species concentrations, exhaust gas velocity, pressure and temperature in a particularly harsh environment. A new measurement device, dedicated to exhaust gas analysis and characterisation, was developed by AKIRA Technologies. This project was part of the CLEANSKY program, the most ambitious aeronautical research program ever launched, and was performed with our partner TURBOMECA – SAFRAN Group – the final user of this system. As the world's leading helicopter engine company, TURBOMECA is deeply involved in optimising the efficiency of the fuel conversion process.
2. Measurement device – concept description
2.1 State of the art
The measurement device developed is especially dedicated to the analysis of combustion taking place in small reverse-flow gas turbines. The complex behaviour of these combustors, linked to their compactness, makes measurement precision and robustness even harder to achieve. In the current state of the art, the architecture consists in positioning a rotating shaft equipped with rakes downstream of the combustor, surrounded by the hot gas flow. The exhaust gas sampling system, thermocouples, velocity and pressure sensors are directly mounted on these rakes and thus move relative to the combustion chamber. The displacement system is subjected to high thermal stress: exhaust gas temperatures can typically reach 1600 K. Despite continuous water cooling, this configuration ensures neither correct sensor operation nor measurement accuracy.
This global approach also leads to a very bulky installation, as the sampling system and thermocouples need to be far enough from the combustor to keep temperatures acceptable: the distance between the combustion chamber and the measuring device commonly reaches several metres.
2.2 Proposed solution
The new system is completely embedded and usable with different combustors and combustion chambers. The measurements performed are the following:
- exhaust gas composition
- exhaust gas temperature
The system measures these characteristics over the complete exit area of the combustion chamber [annular shape] in order to establish complete maps of temperature and gas composition at the combustor exit. The measurements are not only mean values but also high-frequency measurements, in order to capture the combustion's full dynamic behaviour. To establish these 2D maps of gas composition and temperature, a rotating shaft fitted with 4 measurement rakes every 90 degrees – 2 for temperature measurements (typically 5 thermocouples each) and 2 for gas sampling (one averaged and one at 5 discrete radii) – is placed on the combustor axis. As the shaft rotates, the system builds the above maps in the rakes' plane, directly at the combustor exit. The breakthrough proposed is to completely change the concept of the measurement device and to move the gas sampling system, rotating shaft and motion controller upstream of the combustor, into the intake area of the combustion chamber. In this manner, the ambient temperature around the measurement system and motion device is greatly reduced: the inlet air temperature never exceeds 750 K. Another aspect of the new concept is that the gas analyser and acquisition system are now fixed relative to the measurement rakes and can be placed in a friendlier environment. This new concept leads to a much more compact system, more easily transportable from one test cell to another. The increased flexibility allows higher and faster testing capacity for different combustion chambers, improving the research and development process. Figure 1: New measurement system concept. On the other hand, the new concept raises new difficulties. The complete system placed upstream of the combustion chamber must not interfere with the combustion process itself, meaning the upstream flow must not be modified compared to the engine configuration. The system therefore has to be integrated in the ogive of the engine, again a very small volume, especially for small gas turbines. The exhaust gases collected from the rakes and carried to the gas analyser have to be maintained at 190 °C to avoid any condensation of water or unburnt hydrocarbons, which would lead to measurement deviations and errors. This thermal management of the exhaust gases must not interfere with the global cooling system running through the rotating shaft. Because the gas analyser is now fixed relative to the combustion chamber, an innovative sealing system has to be implemented to keep the pneumatic path of the gases from the rakes to the analyser tight. This dynamic sealing has to be ensured under severe thermal conditions: 450 °C for the complete module at the combustion chamber inlet. A specific electrical rotating collector is installed for the thermocouple wiring.
Finally, the cooling system and exhaust gas thermal management system, based on fluid transport [air and water], also need dynamic sealing in a severe thermal environment. All the exhaust gas paths are located in one arm of the ogive, the water cooling system in another, and the cooling air and electrical wires in a third. Specific attention has been paid to the proper guidance of the 400 mm long rotating shaft, which is subjected to a high temperature gradient. The drive shaft from the electric motor to the rotating, collecting shaft runs through the fourth arm of the ogive. Because of external and geometric constraints, the shaft diameter is 60 mm, and all the exhaust gas paths [6 in total], water paths [2, inlet and outlet] and thermocouple wires are integrated in this tiny volume, together with the corresponding sealing systems. Figure 2: Superposition of the NURMSys and a typical gas turbine. Figure 3: Ogive arms with all paths to the rotating shaft.
3. Measurement device – functional and technical description
3.1 Drive system for the rotating shaft
The shaft is driven by an external electric motor which, again, is fixed relative to the combustion chamber. The motor is fitted with an encoder and offers two working modes: continuous motion or step-by-step. In continuous mode, the constant speed is 1 rotation in 12 minutes, and the “exploration” to establish the 2D maps is made over 370° to take into account the gas transfer and analyser response times. The step-by-step mode allows measurements over a given angular area of the combustion chamber; in this mode, a quicker rotation speed of 1 revolution per minute is available. A specific point is that the position measurement and servo control of the shaft and rakes are absolute, meaning no information is lost in case of a power cut or bench shutdown. The coupling between the drive shaft running through the ogive arm and the rotating shaft is made through a conical gear, and specific ball bearings ensure the guidance of the rotating shaft. These bearings are specially designed to withstand severe thermal stress while maintaining proper guidance precision at high temperature. Figure 4: External electric motor.
3.2 Measuring rakes
The measuring module is based on four rakes placed every 90 degrees around the shaft axis. The rakes are 3D-printed in a nickel-based alloy. Two of the rakes, placed opposite each other, are dedicated to the temperature measurement of the exhaust gases: 5 thermocouples per rake at different radial positions are integrated [K or B types], and another one on the water return monitors the cooling temperature. The other two rakes are dedicated to exhaust gas composition analysis: one carries 5 sampling paths at different radial positions to map the gas composition in 2D, the other a single path for mean measurements.
3.3 Thermocouple wiring
The thermocouple wiring runs through a gallery drilled into the shaft. The electrical rotating collector is located inside the ogive of the system. The temperature of the collector has to be kept below 100 °C, so a specific insulation system using ceramic-based material, combined with a dedicated water cooling circuit, has been implemented. Figure 5: Insulation of the electrical rotating collector.
3.4 Air cooling
A specific air cooling system is implemented for the thermal management of the conical gearing and the rake holder at the bottom of the rotating shaft. The air comes from one arm of the ogive, flows to the gearing and through the shaft, and is then expelled with the exhaust gases downstream of the rakes, so it has no impact on the combustion process. The maximum static pressure inside the module is 20 bar, and the pressure of the air cooling system is 15 bar. Finite element computations were performed to evaluate the strength of the mechanical parts subjected to high pressure and temperature loads. Beyond device strength and durability, these computations were needed to ensure controlled deformation of the parts, and hence precise rake positioning and proper measurements. The multi-material nature of the complete assembly and the high thermal gradients made the computations even more complex; they were needed to avoid any risk of seizing during motion of the system, loss of sealing in the pneumatic and liquid collectors, and wear or excessive clearance in the conical gearing, and to guarantee proper rake positioning and measurement. Figure 6: Air cooling outlet at the bottom of the module.
3.5 Water cooling
A dedicated water cooling system is installed to cool the rakes and to quench any chemical reactions inside the exhaust gases during their travel from the rakes to the analyser. The dynamic interface is based on the same principle as the pneumatic collector. Figure 7: Principle of the fluid rotating collector. The implemented solution avoids long exhaust gas paths, so the gases are not overcooled. Nevertheless, an additional thermal management system located at the exit of the ogive maintains the gas temperature at the desired threshold. It is based on an oil-gas heat exchanger with a dedicated oil circuit regulated to 190 °C, preventing condensation of water or unburnt hydrocarbons.
3.6 Control and acquisition system
AKIRA Technologies has developed a complete turnkey measurement device. In addition to the mechanical constraints and solutions presented above, embedded software and electrical hardware dedicated to control and acquisition were developed. A modular sensor architecture is proposed, based on a collection of data acquisition cards. The control software ensures:
- actuator control
- acquisition and signal processing
- monitoring/safety of the device
- recording of the data in specific files
- specific computations [especially combustion efficiency based on gas composition measurements]
- interface with the user and on-site systems [network, etc.]
Figure 8: Acquisition device. All the software has been developed using LabVIEW©. In continuous mode, the data recording is performed every degree, from a known and absolute angular position. The measurements consist of:
- 10 temperatures from thermocouples, which can be recorded at a maximum rate of 20 ksamples/s
- gas concentrations [CO2, CO, NO, NOx, O2, UHC (unburnt hydrocarbons)]
Additional measurements such as instantaneous pressures [5 maximum] and light intensity, at a maximum rate of 20 ksamples/s, are included in the acquisition system; these signals come from additional sensors not included in the presented module. In the step-by-step mode, an adjustable delay set by the user is implemented.
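A small back-of-the-envelope sketch of the continuous-mode timing, using only figures quoted above (1 rotation in 12 minutes, one record per degree, a 370° sweep, 20 ksamples/s thermocouple rate); the script itself is ours:

```python
# Continuous-mode mapping cadence for the NURMSys rotating shaft.
REV_PERIOD_S = 12 * 60          # 1 rotation in 12 minutes
DEG_PER_S = 360 / REV_PERIOD_S  # = 0.5 deg/s

record_interval_s = 1.0 / DEG_PER_S   # one record per degree -> every 2 s
sweep_duration_s = 370 / DEG_PER_S    # full 370-degree exploration

print(f"one record every {record_interval_s:.0f} s")            # 2 s
print(f"full map acquired in {sweep_duration_s / 60:.1f} min")   # ~12.3 min

# At the 20 ksample/s maximum thermocouple rate, this leaves
# 40,000 samples per channel between two angular records.
print(f"samples per degree: {20_000 * record_interval_s:.0f}")
```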
To avoid any electrical disturbance of the signals, the data acquisition cards are located in a dedicated frame and linked to the computer by a dedicated optical connection that is insensitive to electrical disturbance.
5. Conclusion
The measurement of exhaust gas characteristics in “on chamber” conditions is one of the key points for improving the analysis of the combustion process. The environmental conditions in a combustion chamber are not compatible with direct measurement, and the precision of both the measurement itself and the measurement location in a very confined environment is of first-order importance for gas turbine research and development. In collaboration with its partner TURBOMECA – SAFRAN Group, AKIRA Technologies has developed a complete embedded measurement device allowing the final user to analyse combustion chambers with the required precision and robustness. To reach this measurement accuracy, a specific concept and the associated mechanical breakthroughs were developed to overcome the harsh thermal environment.


The Falcon 5X Measurement System
Jean-Pierre Rouby, Dassault Aviation Flight Test Directorate – Istres (France)
Abstract: The Falcon 5X is the latest Dassault Aviation business aircraft. Its detailed design began in 2011 and it is about to make its first flight. Three test aircraft will be used to carry out all the certification tests, with first deliveries by 2017. This paper presents the architecture of the measurement system of the Falcon 5X and its specificities, as well as the methodology applied during its design and testing.
Keywords: Flight Test Installation, Measurement System, digital buses, sensors, telemetry, Ethernet
1. The Falcon 5X program
The Falcon 5X (or F5X) is the latest addition to the range of Dassault Aviation business aircraft. It incorporates many technological changes and new features which make it the most advanced aircraft in the range, and one of the best-performing and most comfortable in business aviation:
• All-new wings designed to achieve new aerodynamic efficiencies
• All-new Safran-Snecma Silvercrest engines
• A new generation of Digital Flight Control System (DFCS) able to manage all moving surfaces, including flaperons and nose wheel steering
• A new Head-Up Display (Combined Vision System) allowing information to be presented from both the Enhanced and Synthetic Vision Systems
• 3rd generation EASy avionics suite
• A bigger and more comfortable flight deck
• The widest cabin in a purpose-built business jet and among the largest in business aviation
• A skylight ceiling window providing natural light from above in the entryway and galley zone in the forward fuselage
• 28 large, expansive windows providing unbeatable luminosity
The Falcon 5X has a maximum range of 5,200 nautical miles at its long-range cruise speed of Mach 0.80; top speed is Mach 0.90. Typical cruise altitudes are 43,000 to 47,000 ft on long-range missions, and the maximum operating altitude is 51,000 ft. The detailed design phase started at the beginning of 2011. The first flight of the first F5X test A/C is expected in the coming months, with the objective of obtaining certification before the end of 2016. The corresponding flight tests will be performed mainly in Istres (France) with 3 development A/C.
2. An overview of the F5X Measurement System
2.1 Development process
On test aircraft, the measurement system (MS) of the flight test installation (FTI) includes the equipment used to collect, record and broadcast the data required for monitoring the test in the Flight Test Room (FTR) and for post-flight analysis. These data can be collected from many sources: physical parameters measured with specialized sensors (temperatures, vibrations, pressures…), functional or FTI digital buses, video cameras and microphones. The MS core includes the pieces of equipment which actually collect, transmit and record the FTI data; it does not include the FTI data sources, which are referred to as the actual instrumentation installed on the various systems of the aircraft. During the design phases of the F5X program, the MS core has been managed in the same way as the other functional aircraft systems.
In that capacity, the development methodology applied to the functional systems has also been applied to the F5X measurement system: same design documents (ICDs…), same formalism required for the documents, same schedule, same design reviews (PDR, FDR and CDR) and the same layout process in the digital mock-up. The definition of the measurement system began with the detailed design phase of the A/C, at the beginning of 2011, conducting in parallel:
• The collection of the needs for each of the some 40 functional systems to be instrumented, and the writing of interface specification documents (FTI ICDs) with industrial partners
• The beginning of the design of the measurement system core, and validation testing of new equipment and architectural principles
2.2 Requirements from test engineers
The list below gives the initial requirements specified by our test engineers and taken into account during the MS core design:
• Capacity to record the FTI data during 10 hours in flight
• Capacity to transmit FTI data and one video signal by telemetry during standard test flights, and FTI data through SATCOM in the case of remote test flights
• Ability to deal with the following inputs:
o A large number of analog sensors: ~1200 on F5X#1, ~700 on F5X#2, ~400 on F5X#3
o About 80 ARINC buses
o The Honeywell EPIC eASCB avionics bus: full acquisition for post-flight analysis and a limited set of parameters for telemetry (TM)
o Ethernet buses from various systems (DFCS, Electrical Generation, Maintenance System, IPPS instrumentation)
o Cockpit audio (to be recorded with the videos and transmitted by TM)
• Ability to process about 20 video streams (FTI cameras and 4 avionics display unit video outputs):
o To be recorded on board
o To be transmitted by TM and displayed in the FTR
o To be displayed in the cockpit and in the cabin for the crew
• An embedded test monitoring station for a flight test engineer (FTE), offering the same features as in the FTR
• In the cockpit, 2 FTI color video displays (Pilot & Copilot) to display, in real time, FTI data synoptics and videos
• Precise time correlation for the acquisition of all the parameters
2.3 Technical choices
The Falcon 5X program was an opportunity to make technical choices to modernize the measurement system of our future civil aircraft. This concerns in particular the Falcon 8X program, whose development A/C have a measurement system with an architecture identical to that of the F5X. The main changes introduced on the F5X and F8X are the following:
• MS core based on Ethernet technologies
• Data Acquisition Unit (DAU) synchronization based on the PTP v2 time protocol
• Analog DAUs with an Ethernet link for configuration, real-time data output and clock synchronization
• A new-generation time server able to provide NTP and PTP v2 messages and IRIG-B signals
• A new-generation data recorder
• A complete redesign of the video acquisition and recording system based on digital technologies
• For the crew, independent FTI displays (also called VIP displays), with touch panels, able to show selectable data synoptics or video streams
• An FTI remote control in the FTR, available to the test engineer to reduce the workload of the crew
2.4 Measurement system description
A synoptic of the F5X MS architecture is given at the end of this paper. The F5X MS consists of the following sub-assemblies:
• DACQNET is responsible for the acquisition and recording of analog sensors and digital buses.
It is also responsible for transmitting a portion of this information by telemetry, and it integrates the GPS receiver, with its own antenna, and the time server
• VIDEONET is responsible for the acquisition, recording and transmission of all video streams. These streams are transmitted to the INFONET subset and by TM to the FTR
• INFONET is the embedded computer system capable of providing the services of an FTR monitoring station for a flight test engineer (FTE) and of supplying the information to be displayed on the Pilot & Copilot VIP displays installed in the cockpit
• The telemetry (TM) system is used to transmit data and a video image in real time to the FTR. It uses S-band and SATCOM
• The MS control system consists of the control panels in the cockpit and in the cabin, and a remote control in the flight test room
• Sensors & buses: all analog sensors added in the A/C to perform the requested measurements, and taps added to safely acquire functional digital buses and aircraft system sensors
3. DACQNET – Sensor and data bus acquisition
DACQNET is the main subset of the MS, responsible for the acquisition and recording of analog sensors and data buses. It is also responsible for building and sending the real-time PCM message transmitted by telemetry to the FTR. Figure 3.1: F5X#1 DACQNET rack. It consists primarily of acquisition boxes connected to an Ethernet network:
• KAM-500 (Acra Control, Ireland): analog sensor DAU
• SARI-NG (ADAS, France): analog sensor DAU
• DataTap-10 (ICS, USA): eASCB avionics bus DAU
• DIANE (AMESYS, France): digital bus DAU (Ethernet, ARINC & serial links), PCM telemetry output
• MDR (Zodiac Data Systems): data recording
• MAR-1040 Ethernet switch (Hirschmann): Ethernet data stream distribution
Table 3.1 – Main DACQNET equipment
All the acquisition boxes transmit the collected data in the form of Ethernet UDP/IP streams. These boxes are all connected to Ethernet switches whose main role is to direct each of these streams to the data recorder and/or to the DIANE boxes, which select the parameters to be included in the TM PCM message. All the Ethernet flows pass through the DACQNET Ethernet switches. The number of SARI-NG and ACRA DAUs, as well as the number of Ethernet switches, varies with the test A/C considered, depending on the number of analog sensors to be processed, as shown below (F5X#1 / F5X#2 / F5X#3):
• Analog sensors: ~1200 / ~700 / ~400
• SARI-NG: 25 / 10 / 7
• ACRA: 3 / 2 / 1
• DataTap-10: 1 / 1 / 1
• DIANE: 2 / 2 / 2
• MDR: 1 / 1 / 1
• MAR-1040: 4 / 3 / 2
Table 3.2 – MS boxes distribution
A DAU data stream may contain data to be monitored during the flight; it is then routed both to a DIANE box and to the MDR recorder. When this is not the case, the flow is routed only to the recorder. The DataTap-10 (ICS, USA) makes it possible to collect avionics parameters on the main eASCB buses (Pilot and Copilot). First, it produces an Ethernet stream containing the entire eASCB traffic, which is only recorded. It also produces an Ethernet stream containing a selective list of parameters that is inserted by a DIANE box into the TM PCM message. The DIANE boxes are primarily responsible for building the TM PCM message; they therefore receive all streams containing parameters to be monitored in the FTR. These Ethernet streams come from the MS DAUs, but they can also be streams generated by functional aircraft systems (DFCS, Power Generation and Multipurpose Maintenance System). The ARINC buses and serial links (RS232, RS422…) are directly connected to the DIANE boxes, which extract the real-time parameters.
They are then fully duplicated on an Ethernet output (called HD-Channel) which is connected to a DACQNET switch and routed to the recorder. All the DAUs are synchronized to the UTC time extracted from the GPS message and distributed in various forms (PTP v2 and IRIG-B in particular) by a LANTIME M600 time server. This allows “à la source” time stamping of the data. Details on the main Ethernet MS network and on the synchronization system are given later in this paper.
4. VIDEONET – Video acquisition, recording and broadcasting
VIDEONET is the subset responsible for the acquisition, recording and broadcasting of all video streams. It consists of entirely new equipment developed in the context of the Falcon 5X program by the TDM company (Merignac, France). The main equipment is a rack which integrates an Ethernet switch, a video encoder and a video server able to process up to 20 video streams. The encoder consists of a processor board and up to 5 SDI acquisition boards with 4 input channels each. It also broadcasts to the server and to external devices in low or high resolution. The server builds the video files, which are recorded on a NAS for post-flight analysis. Figure 4.1 – TDM video rack. At any time, one of the video streams is available for the FTR and sent by telemetry thanks to an ETH/PAL converter, also provided by TDM, which converts an RTP stream into a PAL video signal. The corresponding input video stream is selected from the FTR by the test engineer with a remote control; the code sent by the remote control activates a combination of 5 discrete inputs on the TDM ETH/PAL converter. Video flows are also broadcast to the INFONET subset, allowing their display on the VIP screens in the cockpit. The video streams come first from FTI cameras positioned to film various parts of the aircraft (activity in the cockpit, wings, landing gear…). These cameras are MRCCs from the ADIIS company. VIDEONET is also responsible for acquiring the information displayed to the crew on 4 MDU and PDU avionics displays. It does so by acquiring the DVI outputs of 4 avionics graphics modules (AGM) through 4 DVI/SDI converters, also developed by TDM; these converters can select the active output from the 2 outputs of the corresponding AGM. The encoder of the video rack is able to produce each video stream in real time in the form of HR and LR (high/low resolution) Ethernet RTP streams. For latency reasons, the LR flows are used in TM for restitution in the FTR and by INFONET for display in the cockpit; the HR flows are used by the server to build the video files recorded on the NAS.
5. INFONET – Embedded computing
The MS embedded computer system has 2 functions:
• provide the services of an FTR monitoring station for a flight test engineer inside the cabin
• control the Pilot and Copilot FTI displays installed in the cockpit, allowing the display of synoptics with real-time FTI parameters and of FTI video
It consists of a group of four computers organized around a MAR-1040 Ethernet switch (the same model used in DACQNET). One of these computers analyses in real time the TM PCM message output by the DIANE box and broadcasts the decoded parameters to the 3 other computers, which are only used for display management: the first manages the display screen of the FTE, and each of the two others manages a VIP display.
Figure 5.1 – Videonet/Infonet rack. The inputs of the INFONET subset are:
• the TM PCM message generated by the DIANE master box (a duplicate of the PCM message sent to the FTR)
• the low-resolution Ethernet video streams produced by the TDM video encoder
The application software tools available in INFONET for flight monitoring are exactly the same as those used in the FTR. The FTE can thus use all the synoptics set up for the FTR and follow the test in the same way, with the same remote-control keyboard. The embedded test monitoring station is therefore equivalent to a single-station FTR. The pilot and copilot have independent VIP displays running in video mode or FTI data mode. The VIP screen is a commercial off-the-shelf 7’’ color video screen with a touch panel; its controls (ON/OFF, brightness) have been adapted for use in the cockpit. Figure 5.2 – VIP display positions. In video mode, the crew can select, through a dedicated menu, one of the video channels managed by the TDM encoder. In data display mode, the crew can access, through the “Synoptic” menu, a set of synoptics whose ergonomics are adapted to a small screen. The VIP displays are independent of each other, each controlled by a dedicated computer: the pilot can, for example, display the image of a camera while the copilot monitors individual FTI parameters on the corresponding synoptic. Note that the output of a VIP PC can also be sent to a functional display unit of the EASy III system; the display is then controlled with a dedicated USB trackball which replaces the touch panel. This operating mode can, for example, be used during acoustic certification tests to display a real-time guidance page.
6. Routing the Ethernet data flows
The MS has to deal with a large number of Ethernet flows (coming from FTI DAUs or from external systems). Every stream must be directed to the recorder (in every case), and to the DIANE boxes when it contains data to be monitored in the FTR. They are mostly UDP/IP streams using multicast addressing; this is always true for FTI data coming from the MS DAUs (SARI-NG, ACRA, DIANE). Multicast facilitates the switching of flows within an Ethernet switch: it is used in conjunction with static routing tables associating a multicast address with a set of physical output ports of the switch, namely the ports connected to the acquisition channels of the recorder and of the DIANE boxes. This principle of routing multicast frames through the switches' internal static tables is used within both the DACQNET and INFONET subsets, as sketched below. In the case of the A/C Multipurpose Maintenance System (MMS), all the frames are natively sent in unicast to a functional recorder; MMS frames are delivered to the MS via the mirroring ports of the MMS Ethernet switches and, in this case, routed in a DACQNET switch through dedicated VLANs. The distribution of flows across the various acquisition channels of the recorder and DIANE boxes was adjusted gradually during the design of the MS, with the constant concern of balancing the load on the various acquisition channels.
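Since the DAU streams are plain UDP/IP multicast, a receiver can subscribe to one of them with a few lines of standard socket code. A minimal sketch, assuming a ground-side analysis tool; the group address and port are hypothetical, not the aircraft's actual addressing plan:

```python
import socket
import struct

GROUP = "239.1.2.3"   # hypothetical multicast group of one DAU stream
PORT = 5000           # hypothetical UDP port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Join the multicast group. On board, the switches' static routing
# tables play this role instead, steering the group to the recorder
# and DIANE ports.
mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    payload, addr = sock.recvfrom(65535)
    print(f"{addr[0]}: {len(payload)}-byte FTI frame")
```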
7. MS equipment synchronization
Much of the MS equipment is synchronized via the PTP v2 network protocol: this is the case, for example, for the SARI-NG and ACRA DAUs and for the INFONET computers. Other, sometimes older, equipment lacks this feature and must be synchronized with an IRIG-B signal; this is the case for the DIANE boxes and the MDR recorder. Furthermore, the TDM video rack is synchronized through the NTP network protocol. The time server used is a Meinberg LANTIME M600, which can handle all of PTP v2, NTP and IRIG-B, and synchronizes itself to UTC through a GPS antenna dedicated to the MS. The Hirschmann MAR-1040s are PTP v2 switches configured in Boundary Clock mode; they therefore act as PTP grandmaster for the DAUs connected to them (mainly SARI-NG and ACRA). The KN-Systems AES is a new piece of equipment, specially developed to check the correct timing of all the MS boxes. It takes as inputs the LANTIME PPS signal (the reference) and the PPS signal of every DAU (SARI-NG, ACRA, DIANE), and verifies that the difference between each DAU PPS and the reference stays within a programmable time interval (currently 50 µs) to declare the whole system correctly synchronized, as illustrated below. The global and detailed synchronization status is indicated through LEDs on the equipment and also transmitted in a digital message decoded and displayed in the FTR. This box is very helpful to the FTI engineer during the initialization phase of the MS and is often watched during the flight in the FTR.
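A minimal sketch of the kind of PPS check the AES box performs, assuming timestamped PPS edges are available; the function, names and example values are ours, not the AES implementation:

```python
PPS_TOLERANCE_S = 50e-6  # programmable limit currently used on the F5X

def sync_status(reference_pps_s, dau_pps_s):
    """Compare each DAU's PPS edge against the time-server reference.

    reference_pps_s: arrival time of the LANTIME PPS edge (seconds)
    dau_pps_s: mapping of DAU name -> arrival time of its PPS edge
    Returns a per-DAU verdict, mirroring the per-box status LEDs.
    """
    return {
        name: abs(t - reference_pps_s) <= PPS_TOLERANCE_S
        for name, t in dau_pps_s.items()
    }

# Hypothetical one-second snapshot: two DAUs locked, one drifting.
edges = {"SARI-NG#1": 100.000012, "ACRA#1": 99.999970, "DIANE#1": 100.000180}
status = sync_status(100.0, edges)
print(status)                # DIANE#1 -> False (180 us off)
print(all(status.values()))  # global status: False
```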
8. Control system and telemetry
Telemetry allows receiving in real time, in the FTR, the stream of FTI parameters (sensors or data taken from digital buses) and one video stream among those managed by the VIDEONET subsystem. Telemetry uses S-band, and the PCM message is transmitted at 1.4 Mbps. The PCM message can also be sent via SATCOM; an additional box then converts the PCM message into an Ethernet UDP/IP stream. In the FTR, the remote control of the test engineer allows, in particular, the following:
• Selecting the video stream sent by TM
• Choosing the DIANE acquisition program (P1/P2)
• Resetting SARI-NG clusters
• Resetting DIANE boxes
Control panels are available in the cockpit, allowing the crew the following:
• Turning the measurement system general power ON/OFF
• Turning the VIDEONET power ON/OFF
• Starting/stopping the video recording
• Resetting vital equipment (DIANE boxes & flutter SARI-NG cluster)
Finally, the FTI engineer also has a control panel in the cabin which allows configuring the MS equipment and checking the smooth functioning of the whole system before flight departure.
9. Progress and prospects
This new measurement system was designed and developed in about four years. A laboratory test rig was specially set up to test the new architectural principles and operating modes as early as possible, concerning in particular the networking and the global synchronization process. An MS of the same type has already been used for several months on 2 Falcon 8X test aircraft and gives complete satisfaction. It has also been used operationally during ground tests and engine runs on the Falcon 5X#1. We now look forward to the first flight of the F5X#1 for its final validation.
10. Glossary
A/C: Aircraft
AGM: Advanced Graphic Module
CDR: Critical Design Review
DAU: Data Acquisition Unit
DFCS: Digital Flight Control System
FDR: Final Design Review
FTE: Flight Test Engineer
FTI: Flight Test Installation
FTI core: Equivalent to MS core
FTM: Flight Test Measurement System
FTR: Flight Test Room
GPS: Global Positioning System
ICD: Interface Control Document
IP: Internet Protocol
IPPS: Integrated Power Plant System
IRF: Interface de régulation FADEC (IPPS FTI device)
MDU: Multifunction Display Unit
MMS: Multipurpose Maintenance System
MS: Measurement system
MS core: Equipment to collect, record and transmit FTI data
NAS: Network Attached Storage
NTP: Network Time Protocol
PCM: Pulse Code Modulation
PDR: Preliminary Design Review
PDU: Primary Display Unit
PPS: Pulse Per Second
PTP: Precision Time Protocol
RTP: Real-time Transport Protocol
TM: Telemetry
TS: Transport Stream
UDP: User Datagram Protocol
UTC: Universal Time Coordinated
VIP: Very Important Parameters (Pilot/Copilot FTI display)
11. Annex
Figure 11.1 – The F5X#1 Measurement System – General architecture


High speed development of a temperature remote acquisition system to reduce instrumentation heat sink in an aircraft engine - Jean-Christophe Combier - AIRBUS Operation SAS - France. For new aircraft certification, the engine contains a lot of instrumentation in a small, confined area. New high-performance engines tend to have a large external volume but fewer empty slots for instrumentation. Even though acquisition systems have become more powerful, compact and generic, the heat they dissipate is too high for the surrounding devices, and the thermal aspect becomes a major issue. For the A320 NEO engine, we had 7 months to produce a new solution: a very low-consumption device for the main measurement type (thermocouples), with no impact on the definition. The selected solution was based on:
- a partner selected for its technical skill, technology watch and agility in driving this kind of subject,
- a system based on the best mass-market components,
- real-time decision-making within AIRBUS or with the supplier to manage the risks,
- a demonstrator built early in the development to de-risk all technical topics in the first months,
- flexibility in power supply (28 V DC or Power over Ethernet), two kinds of output (ARINC 429 or IENA Ethernet packets) and ±2 °C global uncertainty over -55 °C to +105 °C.
The result was a 64-channel thermocouple unit, compliant with the mechanical and electrical definition of the generic acquisition system, with power consumption reduced from 70 W to less than 2 W.


Combining GPS-based Precise Timing and Accurate Navigation: requirements, benefits
Emmanuel Sicsik-Paré (1), Gilles Boime (1), John Fischer (1)
(1) Spectracom – Les Ulis, France; Rochester NY, USA
Emmanuel.sicsik-pare@spectracom.orolia.com
Gilles.boime@spectracom.orolia.com
John.fischer@spectracom.orolia.com
1 Abstract
It has been a continuous trend that all aerospace and payload programs require more and more parameters to be measured during flight tests, at increasing sampling rates. Contextual data associated with the measurements (timestamp, geolocation, attitude) are instrumental to a relevant analysis of the measured data. Flight test teams, in charge of engineering high-speed measurement systems, therefore need to ensure proper time alignment amongst all on-board systems, facing two challenges:
- distribute precise time (better than 1 µs), even in case of GPS loss, over the whole test mission duration
- distribute precise position and attitude, time-stamped consistently with the distributed time
In this paper, we demonstrate how a combined position & attitude measurement sensor and precise time server can meet all the “Positioning, Navigation and Timing (PNT)” needs of complex observation payloads such as an on-board test system or a SAR imagery radar, with benefits in terms of architecture simplification and an overall increase in time and attitude accuracy. We also review the benefits of associating a high-stability clock with a GPS receiver, in terms of improving some aspects of GPS reception performance.
Keywords: Position Navigation and Timing, GPS performance improvement, Interference Detection and Mitigation.
2 Introduction
Position/attitude measurement instruments on the one hand, and time and frequency generation instruments on the other, have traditionally been very distinct products or solutions, handled by different teams and specialists within companies or research institutions. As a “time-based” positioning system, a Global Navigation Satellite System (GNSS) in essence provides simultaneously a clock reference and the means to elaborate a position solution. The generalized use of GNSS therefore creates a major opportunity to bring positioning and timing techniques together. Thanks to the ongoing miniaturization of all the associated components (GNSS receiver, high-performance frequency oscillator, inertial measurement unit), it is possible to integrate both position/attitude measurement and time/frequency generation functions in a single instrument, using the GNSS signal as the common reference. On the timing side, the use of a high-performance oscillator disciplined by GNSS combines the high short-term frequency stability of the oscillator with the high long-term stability of GNSS. It also delivers low-phase-noise frequency signals, which are important in many radio communication and radar applications. The implementation of network timing protocols like NTP or PTP (IEEE 1588) provides an elegant way to transfer precise time through an IP network, thus avoiding the need for dedicated media (like IRIG). In addition, the timing distribution is immune to temporary GNSS loss, as the frequency oscillator, in holdover mode, is used to maintain the local timescale, with some long-term drift, which is actually a key performance criterion for the oscillator.
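The long-term drift in holdover can be bounded with a standard error-budget formula; a minimal sketch, assuming illustrative OCXO-class numbers (the oscillator figures below are our assumptions, not Spectracom specifications):

```python
def holdover_error_s(t_s, freq_offset=1e-10, daily_aging=1e-10):
    """Worst-case time error accumulated during GNSS holdover.

    Standard budget: a residual fractional frequency offset contributes
    linearly, oscillator aging quadratically:
        dt = (df/f) * t + 0.5 * (aging per second) * t**2
    Temperature effects are ignored in this sketch.
    """
    aging_per_s = daily_aging / 86400.0
    return freq_offset * t_s + 0.5 * aging_per_s * t_s ** 2

# With these numbers, a 4-hour GNSS outage accumulates ~1.6 us of error,
# so the better-than-1-us target quoted in the abstract requires either
# a better oscillator or shorter outages.
for hours in (1, 4, 8):
    t = hours * 3600
    print(f"{hours:2d} h holdover: {holdover_error_s(t) * 1e6:.2f} us")
```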
On the position and attitude side, tight coupling between GNSS (in standalone or in differential correction mode) and an IMU likewise combines the short-term "stability" of the IMU with the absolute accuracy of GNSS reception, in order to provide navigation solutions that include all the parameters of interest: position, orientation, speed, rotation rate, acceleration, etc. Temporary loss of GNSS is handled by the IMU, which maintains navigation solutions in dead reckoning mode – with a drift depending on the IMU performance. High dynamics can be captured by using a high sampling rate from the IMU.

Such a combined approach provides benefits in terms of Size, Weight and Power (SWAP), as applications requiring both position/attitude and timing can access them through a single-instrument, single-antenna solution, avoiding the discrepancies that result from separate sources.

3 Examples of applications requiring PNT

In order to illustrate how a single instrument can efficiently provide all critical position, attitude and timing information, we chose two applications: one in the Intelligence, Surveillance, Reconnaissance (ISR) area, the other in the flying test bench area.

3.1 On-board test bench

New aircraft or modernization programs require more and more data to be recorded and analyzed in view of qualification and certification. Distributed sensors operate at increasing sample rates in order to capture transient phenomena or high frequency vibrations. Those data must be acquired and recorded in real time, an IP network topology being well adapted to cope with such large streams of data. Along with measurement data, contextual data are needed in order to perform a relevant analysis. Timestamps provide time alignment of samples and allow measurements made by different sensors to be correlated at exactly the same time. Position, attitude (relative to the body frame), speed, and acceleration measurements can be used to directly determine relationships between the measured data and some of the flight envelope parameters.

3.1.1 Typical architecture

Fig 1: Typical timing architecture for an on-board timing system

In a typical on-board flight test system, the measurement data are time-stamped by the recorder. The recorder timescale and clock is itself disciplined thanks to 1 PPS and/or IRIG B signals for legacy recorders, and thanks to a Network Time Protocol (NTP) client or Precision Time Protocol (PTP – IEEE 1588 v1 & v2) slave for recent recorders. The iNET standard recommends the use of the PTP protocol as the way to transfer precise time on an Ethernet network from a master clock to a slave clock, thanks to the exchange of PTP messages that contain ingress and egress message timestamps, allowing the PTP slave to adjust its clock to synchronize with the PTP master clock. Required Position and Navigation (PN) data are stored within the recorder along with other sensor data, but at a lower rate (typically 1 to 100 Hz).
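The clock adjustment performed by a PTP slave reduces to simple arithmetic on the four timestamps exchanged in the Sync/Delay_Req pattern. The sketch below shows that arithmetic with hypothetical timestamps; it assumes a symmetric path, since any asymmetry appears directly as an offset error.

```python
# Two-way time transfer arithmetic used by PTP (IEEE 1588):
# t1: Sync egress at master      t2: Sync ingress at slave
# t3: Delay_Req egress at slave  t4: Delay_Req ingress at master

def ptp_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    offset = ((t2 - t1) - (t4 - t3)) / 2.0  # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0   # mean one-way path delay
    return offset, delay

# Hypothetical timestamps (s): slave clock 1.5 us ahead, 4 us path delay.
offset, delay = ptp_offset_and_delay(100.000000000, 100.000005500,
                                     100.000020000, 100.000022500)
print(f"offset = {offset * 1e6:.2f} us, delay = {delay * 1e6:.2f} us")
```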
3.1.2 Timing and positioning requirements

The time accuracy required for data time-stamping depends on the sampling frequency, but typically ranges from 100 ns to 10 ms. The appropriate techniques for time transfer can be summarized as below:

Required time transfer accuracy    Appropriate time transfer method
1 ms – 10 ms                       NTP (network); IRIG B AM (dedicated media)
10 µs – 1 ms                       PTP (network); IRIG B DCLS (dedicated media)
100 ns – 10 µs                     1 PPS (dedicated media)

Fig 2: Time transfer methods according to the required time-transfer accuracy

Position and navigation requirements are strongly application dependent. However, attitude measurement is a common requirement, with heading accuracy ranging between 0.1° and 1°.

3.2 Synthetic Aperture Radar for imagery

All recent analyses confirm the need for ISR capabilities, whether on land, in the air, at sea or in space. Synthetic Aperture Radar (SAR) is now a mature technology that provides imagery of vast ground or maritime zones along the trajectory of the vehicle. It is an interesting complement to optical observation, thanks to its capability to see camouflaged objects in any weather.

Fig 3: SAR imagery radar principle and SAR image

3.2.1 Key stakes

In a SAR radar, a synthetic antenna is created thanks to the straight movement of the vehicle. The antenna's virtual length is roughly equal to the distance traveled during the signal integration period. This synthetic antenna is therefore very long, resulting in very good resolution along the movement axis. As with optical observation, the important criteria for SAR performance are related to:
- Resolution: the ability to resolve an object of interest within several pixels, to allow reconnaissance (and identification)
- Geometric conformity: a square on the ground must be reported as a square on the SAR image (without echo migration)
- Contrast: the ability to distinguish between objects that have a small reflectivity difference

It has been shown that these key features are adversely impacted by many PNT aspects. On the time and frequency side:
- slow emitter frequency instability generates echo migration and decreases image conformity;
- phase noise increases the post-correlation spurious level and tends to decrease the contrast.
On the navigation measurement side, if not properly measured and compensated for:
- longitudinal position variations impact geometric conformity;
- longitudinal, transverse and vertical velocity variations affect both geometric conformity and resolution;
- transverse and vertical accelerations impact resolution.

The following example from [5] provides a numerical calculation of the constraints applicable to the standard deviation (STD) of position, velocity and acceleration, in order to maintain:
- the geometric conformity criterion: echoes shift by less than half a resolution cell;
- the resolution criterion: the size of a resolution cell increases by less than 10 %.

For an X band lateral SAR radar (λ = 3 cm), 1 m resolution, -10° elevation, 0.5 s integration time:

                     Geometric conformity criterion                Resolution criterion
Longitudinal move    STD on longitudinal position < 0.5 m          STD on longitudinal velocity < 2 m/s
Transverse move      STD on transverse velocity < 1.6×10⁻² m/s     STD on transverse acceleration < 1.2×10⁻¹ m/s²
Vertical move        STD on vertical velocity < 8×10⁻² m/s         STD on vertical acceleration < 8×10⁻¹ m/s²

Fig 4: Requirements on navigation data accuracy for a SAR radar

In addition, for vehicles (helicopters, drones) which have significant parasitic yaw and roll movements, it is necessary to adjust the antenna steering direction, based on attitude measurements.
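As a back-of-envelope companion to the figures above, the classical strip-map approximation links the integration time to the azimuth resolution through the synthetic aperture length. In the sketch below, the wavelength and integration time come from the example above; the platform speed and slant range are assumed values chosen for illustration.

```python
# Strip-map SAR approximation: azimuth resolution ~ lambda * R / (2 * L),
# with synthetic aperture length L = v * T_int.

wavelength = 0.03       # X band, 3 cm (from the example above)
t_int = 0.5             # integration time, s (from the example above)
v = 150.0               # assumed platform speed, m/s
slant_range = 5_000.0   # assumed slant range, m

aperture = v * t_int                                   # synthetic aperture L
res_az = wavelength * slant_range / (2.0 * aperture)
print(f"L = {aperture:.0f} m, azimuth resolution ~ {res_az:.1f} m")
```

With these assumptions, the 0.5 s integration yields a 75 m synthetic aperture and the 1 m class resolution quoted above, which makes the velocity and acceleration tolerances of Fig 4 concrete: any uncompensated motion during those 0.5 s directly smears the aperture.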
Following radar processing, the available SAR images (strip map, or focalized map) must be properly geo-referenced. Such referencing – a classical operation in surveying – requires knowledge of both the position and attitude of the observation vehicle with the appropriate accuracy.

3.2.2 Typical architecture

The diagram below shows the typical (simplified) architecture of a SAR radar, and the PNT information needed by each subsystem of the radar.

Fig 5: PNT requirements for the subsystems of a SAR radar

4 State of the art PNT instrument

Geo-PNT is an all-in-one-box PNT instrument that provides:
1. A time and frequency reference:
- a low phase noise, high stability frequency signal, based on either an Oven-Controlled Crystal Oscillator (OCXO) or a Chip Scale Atomic Clock (CSAC);
- configurable pulsed signals, including 1 PPS and IRIG B, referenced to UTC.
2. Navigation solutions (serial or LAN interface):
- position, velocity, accelerations, yaw, pitch, roll, rotation rates.

The Geo-PNT, with its internal MEMS Inertial Measurement Unit (IMU), can be configured to work in standalone mode or in RTK mode. Accuracies are provided in Fig 6.

             Horizontal / vertical position   Velocity    Acceleration   Attitude (roll, pitch / heading)
Standalone   1.5 m / 2.5 m                    0.1 m/s     0.15 m/s²      0.2° / 0.5°
RTK          0.05 m / 0.1 m                   0.02 m/s    0.1 m/s²       0.1° / 0.3°

Fig 6: Geo-PNT position & navigation performances

5 Improving GNSS receiver operation thanks to a high performance oscillator

Having a good oscillator obviously contributes to the good timing performance (short-term stability, phase noise) required by applications like flying test benches, as well as radar and other ISR applications. But it also contributes to improving the GPS receiver performance itself. GPS reception requires that the receiver's clock aligns with the transmitted satellite clock. This alignment needs to be initialized at receiver startup, but also needs to be maintained throughout receiver operation. As most GPS receivers use an oscillator with poor short-term stability (typically a TCXO), the receiver clock needs to be adjusted at each position fix, as the clock error is one of the four variables to be calculated along with the three position components (of course, if the receiver is fixed at a well-qualified position – which is often the case for timing receivers – then a single satellite is enough to discipline the receiver clock).

Fig 7: Compared ADEV for GPS and various types of oscillators

Allan Deviation (ADEV) [1] is the widely used metric for characterizing clock stability over different periods of observation (tau). The lower the ADEV, the more stable the oscillator. Fig 7 shows the typical ADEV as communicated by various oscillator providers (rubidium: LPFRS from Spectratime; CSAC: SA45 from Microsemi; OCXO from Rakon). In addition, it shows the ADEV of the GPS recovered clock, as well as of a GPS disciplined rubidium clock (SecureSync from Spectracom); the latter illustrates the effect of disciplining, which combines the short-term stability of the rubidium with the long-term stability of GPS. It can be seen that, below a tau of 100 s, the GPS recovered clock is less stable than all the oscillators. For tau higher than 100 s, GPS becomes better than an OCXO, and slightly better than a CSAC. It becomes better than rubidium only for tau higher than 20 000 s. The use of a good quality oscillator, i.e. one featuring good long-term stability, provides interesting options for improving some receiver features, depending on the application.
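The ADEV values discussed above can be estimated directly from logged fractional-frequency data. A minimal non-overlapping estimator is sketched below on synthetic white frequency noise; the noise level is arbitrary and not a quoted figure for any of the oscillators mentioned.

```python
import numpy as np

def adev(y: np.ndarray, m: int) -> float:
    """Non-overlapping Allan deviation at tau = m * tau0, computed from
    fractional-frequency samples y taken at intervals tau0."""
    n_blocks = len(y) // m
    y_bar = y[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)  # average over tau
    d = np.diff(y_bar)
    return float(np.sqrt(0.5 * np.mean(d ** 2)))

# White-frequency-noise toy data at tau0 = 1 s: the ADEV should fall off as
# 1/sqrt(tau), mimicking the short-term region of the curves discussed above.
rng = np.random.default_rng(0)
y = 1e-11 * rng.standard_normal(1_000_000)   # assumed noise level
for m in (1, 10, 100, 1000):
    print(f"tau = {m:5d} s   ADEV = {adev(y, m):.2e}")
```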
5.1 Improvement of vertical position and velocity measurement accuracy

Krawinkel et al. [2], from the Institut für Erdmessung at Leibniz Universität Hannover, created a receiver clock model using real ADEV measurements of a few oscillators, which was then input to an Extended Kalman Filter as a way to determine the influence of clock process noise in code-based GPS single point positioning. This work concluded that vertical position and vertical velocity accuracies could be improved by 58 % and 66 % respectively when using a rubidium oscillator.

5.2 Integrity monitoring

Bednarz et al. [3], from MIT, made laboratory measurements and observed improvements in vertical position accuracy ranging from 34 % to 44 %, also using an atomic reference. Going further, Bednarz proposes to use the good external clock reference (instead of processing the received signal to extract the clock error) in order to determine the three position parameters, and then to extract the clock error from the pseudo-range measurements, as it is a good predictor of the vertical position error. By setting a range of acceptable clock errors, it is possible to establish a Vertical Protection Level (VPL) as the main input of a clock-aided integrity monitoring mechanism. Such integrity monitoring adapts to changing atmospheric conditions, multipath or other clock error sources.

5.3 Multipath mitigation

In a similar approach, Preston et al. [4] developed a CSAC clock model allowing the three geometric position coordinates to be solved for while relying on the atomic clock, with only three satellites in view (which is appreciable in urban canyons, for example). A suddenly growing GPS recovered clock error, extracted from pseudorange measurements, probably reflects the presence of multipath. The satellite affected by multipath can then be excluded from the 3D position calculation, as a multipath-mitigation mechanism. This provides a reliable timing source for Interference Detection and Mitigation (IDM), improving confidence in autonomously computed PNT solutions.

5.4 Three-satellite operation

We have seen earlier that clocking the GPS receiver with a high stability oscillator allows GPS processing to be limited to solving for the three position parameters, with only three satellites in view. This in itself is an interesting feature for applications where only a small portion of the sky is accessible, either momentarily (aircraft manoeuvres) or permanently (masked GPS antenna, canyons).
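The three-satellite mode of operation boils down to removing the clock bias from the navigation equations and solving only for geometry. The sketch below does this with a plain Gauss-Newton iteration on synthetic data; the satellite positions and test point are made-up numbers, and the clock bias is assumed to have already been removed from the pseudoranges thanks to the stable external oscillator.

```python
import numpy as np

def solve_position_3sv(sats, pr, x0, iters=10):
    """Gauss-Newton solve of |x - s_i| = pr_i for the three position
    coordinates only; the receiver clock bias is assumed known (and already
    removed from the pseudoranges) thanks to the high stability clock."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        rho = np.linalg.norm(sats - x, axis=1)   # predicted ranges
        H = (x - sats) / rho[:, None]            # unit line-of-sight rows
        x += np.linalg.solve(H, pr - rho)        # 3 equations, 3 unknowns
    return x

# Synthetic example: three satellite positions (m, ECEF-like) and the exact
# ranges to a known test point; the solver should recover that point.
sats = np.array([[15e6, 10e6, 20e6],
                 [-10e6, 18e6, 15e6],
                 [5e6, -20e6, 18e6]])
truth = np.array([4.20e6, 0.17e6, 4.78e6])
pr = np.linalg.norm(sats - truth, axis=1)
print(solve_position_3sv(sats, pr, x0=[0.0, 0.0, 6.4e6]))
```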
6 Conclusion

In this paper, we have illustrated how applications like airborne radar surveillance and test benches require accurate time and frequency, as well as navigation data, to be provided to their various sub-systems. System performance depends (among other factors) on the accuracy of the PNT data. With Geo-PNT, Spectracom offers a solution which combines both timing and navigation within a single enclosure, easing the integration of this function as well as optimizing its Size, Weight and Power. In addition, combining a GPS receiver with a high stability oscillator can contribute to the improvement of GPS reception and of IDM mechanisms. Depending on application requirements, a high performance external clock – OCXO, rubidium or CSAC – can be used to increase vertical position and velocity accuracy, to implement a clock-based integrity monitoring mechanism (including multipath mitigation), or simply to increase GPS reception reliability when only a limited number of satellites can be viewed.

7 Acknowledgment

The present study was made possible thanks to the material provided by Geodetics Inc. and the warm advice of Dr. Jeffrey A. Fayman, Vice President of Geodetics Inc.

8 References

[1] D. Allan, "Time and Frequency (Time-Domain) Characterization, Estimation, and Prediction of Precision Clocks and Oscillators", vol. 34, pp. 647–654, 1987
[2] T. Krawinkel and S. Schön, "Applying Miniaturized Atomic Clocks for Improved Kinematic GNSS Single Point Positioning", Proceedings of the 27th International Technical Meeting of the ION Satellite Division, Tampa, Sept. 8–12, 2014
[3] Sean G. Bednarz, "Adaptive Modeling of a Global Positioning System Receiver Clock for Integrity Monitoring during Precision Approach", thesis, Massachusetts Institute of Technology, 2004
[4] Sarah E. Preston and David M. Bevly, "CSAC-Aided GPS Multipath Mitigation", Proceedings of the 46th Annual PTTI Meeting, Boston, Dec. 1–4, 2014
[5] Jean-Philippe Hardange, Philippe Lacomme, Jean-Claude Marchais, "Radars aéroportés et spatiaux", Masson, Sept. 1995


ETTC 2015 – European Test & Telemetry Conference

Optimizing Bandwidth in an Ethernet Telemetry Stream using a UHF Uplink

Gonzalez-Martin, Moises; Rubio-Alvarez, Pedro
Flight Test – Airbus Defence & Space, Avd. John Lennon s/n, 28906 Getafe (Spain)
(moises.gonzalez@airbus.com, pedro.r.rubio@airbus.com)

Abstract: A conventional flight test demands a large amount of real-time data. Nowadays it is natural to send video signals and data in the same telemetry stream, within a limited bandwidth. The big challenge for the flight test telemetry engineer is being able to select the set of information needed for each test, because it is frequently necessary to transmit more information than the bandwidth available on the telemetry stream allows. Up to now, it was a regular task to define a static set of information for each group of tests. That implies rebuilding the telemetry stream, changing the acquisition mapping and performing the telemetry checks in order to verify that all the needed information will be transmitted. The solution developed by Flight Test Spain allows choosing, on ground and in real time, which information is needed during a flight test execution, and transmitting it to the airplane dynamically by means of a UHF uplink. Whenever a request is made from the ground, it produces an on-board telemetry stream with the new set of information to be transmitted.

Keywords: Telemetry, Uplink, FxS, Packetizer, Bandwidth, IENA.

1 Introduction

The flight test community faces issues with insufficient bandwidth available to support telemetering requirements. The amount of spectrum available for aeronautical telemetry is inadequate today, and demand is growing exponentially. Aeronautical telemetry is used to transmit real-time data during flight tests, and the availability of such data is integral to the productivity and safety of live flight test programs. Sufficient telemetry bandwidth is critical to maintaining rigorous system testing. Each country manages its own electromagnetic regulations. In some cases, technology research initiatives offer the prospect of increasing bandwidth efficiency and, if they reach their intended capability, may partially offset telemetry spectrum demand until more spectrum access can be secured.

2 Technical Background

In recent years there has been a shift from proprietary, closed solutions for Flight Test Instrumentation (FTI) networks towards more open, standards-based systems using Ethernet technology. The trend towards Ethernet is further driven by the CTEIP Integrated Network Enhanced Telemetry (iNET) initiative, which is pushing the adoption of Ethernet technology for the future of FTI [1]. Today there are several technologies based on Ethernet as the transport protocol. Flight Test Spain is focused on these running technologies for future flight test instrumentation.

The concept of iNET [5] is to use internet-like architectures (Transmission Control Protocol/Internet Protocol (TCP/IP), Space Communications Protocol Standards (SCPS), and Consultative Committee on Space Data Systems (CCSDS)) to form a wireless network to supplement point-to-point telemetry capabilities. While some critical/safety data will always need a dedicated point-to-point reliable link, a significant portion of the data may be more efficiently handled by a network topology. iNET is currently in the architectural definition phase. iNET is a huge project (its economic model estimates that the cost impact of inadequate telemetry spectrum at a test range complex over a twenty-year period will reach almost $23 billion) that is still at an early stage.
iNET-X [4] is the complete framework for iNET test articles that has been developed around the core recommendations and technologies outlined in the iNET standard, ensuring interoperability for instrumentation networks. iNET-X extends the iNET standard, ensuring high performance, network coherency, ease of setup, and manageability.

IENA is another protocol widely used within Airbus for flight test instrumentation. Airbus Defence & Space Flight Test Spain uses it in its standard instrumentations due to its flexibility and its easy adaptation to the Ethernet UDP protocol.

3 Ethernet acquisition and the packetizer bus monitor: positive and negative aspects

Packetizer bus monitors are designed for networked data acquisition systems where the data acquired from the avionics buses is captured and re-packetized in Ethernet frames for transmission to an analysis computer or network recorder. The packetizer bus monitor encapsulates all messages on the bus and packages each message in the payload of a UDP/IP packet. The application layer contains bus identifiers, sequence numbers and timestamps. The packetizer mode has clear advantages over the parser mode because there is no need to select messages on the bus. Packetizer mode makes an acquisition system very easy to program: there are no configuration errors and the programming of the system is universal (no need to reprogram the system depending on the user's information requirements or any change in the bus message contents). The drawback of this mode is the increase of bandwidth in the acquisition stream, because all the messages are captured. But, as stated above, not having to select any parameter during the elaboration of the FTI map is a great benefit.

4 The FxS protocol as parameter request concentrator

FxS [2] (Flight Test Data Exchange Service) is a platform-independent protocol for the transmission of data between clients and servers within a local area network (LAN). The FxS server-client architecture allows the server to know in real time which parameters and messages all the users are demanding at a certain time of the flight test.

Picture 1: FxS server-client connection protocol

FxSIENA Server is an implementation of the FxS server protocol with the IENA [3] protocol as the FTI transport layer. FxSIENA receives the IENA data coming from the telemetry downlink and serves this information in real time to the FxS clients connected to the telemetry station. FxSIENA Server has been modified to collect all the information and send it dynamically to the software responsible for uploading the parameter list through the UHF uplink. FxSIENA Server is also responsible for the bandwidth control: it receives a parameter indicating the currently used bandwidth and notifies the user of the bandwidth utilization.

The protocol is based on client-server communication. When an on-ground client demands a new parameter to be monitored, the new telemetry message is serialized and sent through the UHF uplink. On board, the UHF receiver decommutates the frame and regenerates the list of parameters. After that, the software in the telemetry gateway is reconfigured to filter the new set of parameters. The telemetry message is, essentially, the list of parameters to be sent. Additionally, it can contain information on how to identify the message in the acquisition stream.
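To make the request path concrete, the sketch below shows what an on-ground client-to-server parameter request could look like. The message layout (magic bytes, counter, null-terminated parameter names) is entirely hypothetical: the real FxS and IENA formats are Airbus-internal and are not reproduced here.

```python
# Hypothetical sketch of a client asking the server for a new set of
# parameters to monitor, in the spirit of the FxS exchange described above.
import socket
import struct

def build_request(params: list[str], counter: int) -> bytes:
    """Serialize a parameter-request message (assumed, illustrative layout)."""
    payload = b"".join(name.encode("ascii") + b"\x00" for name in params)
    header = struct.pack(">4sHH", b"FXRQ", counter, len(params))
    return header + payload

def send_request(params, host="192.0.2.10", port=5555, counter=1):
    """Fire the request at an assumed server address over UDP."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(build_request(params, counter), (host, port))
    finally:
        sock.close()

# Example: ask the server to add two (hypothetical) parameters to the set.
send_request(["ENG1_N1", "FLAP_POS"], counter=7)
```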
Picture 2: Communication process between the ground and the aircraft (transmission period: 1 s)

Picture 3: Telemetry message with acquisition information

The telemetry message can be significantly large (for a list with 10000 messages, the telemetry message is 100 Kb). For that reason, it can be compressed or partially sent using a delta mechanism, i.e. sending only the list of parameters to be added or removed with respect to the current operating list. A confirmation check can be added using a standard error-detecting code; in this case, it has been tested using the CCITT CRC-16.
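The CCITT CRC-16 used for the confirmation check is straightforward to implement; a bitwise version is sketched below. The common CCITT-FALSE variant (polynomial 0x1021, initial value 0xFFFF) is assumed, since the paper does not state which initialisation was flown.

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Bitwise CRC-16/CCITT (polynomial 0x1021, MSB first)."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

# A delta update could then be framed as the serialized list of added ('+')
# and removed ('-') parameters followed by its CRC, checked on board before
# the new operating list is applied. The parameter names are made up.
message = b"+ENG1_N1\x00-FLAP_POS\x00"
print(hex(crc16_ccitt(message)))
```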
The data processing configuration is the same both on ground and on board. This is because the telemetry gateway software only filters messages inside the IENA packet, maintaining the header and resizing the payload according to the messages requested.

Picture 4: IENA payload before and after passing through the telemetry stream

5 UHF Telemetry Uplink

A UHF transceiver has been tested for use as the telemetry uplink. The equipment incorporates a narrowband AM/UHF/VHF receiver and an FSK decoder. The demodulation is done in amplitude. It integrates the FSK decoding circuit and restores a frame with a baud rate set to 1200 baud. On board, the uplink receiver can be connected to the UHF/VHF aircraft radio.

6 Joining the chain links

If the set of parameters needed during a flight test changes during the flight, the crux of the matter is to change the telemetry stream dynamically according to the real-time requirements. Behind the idea there is an acquisition system based on packetizer buses, an FxS server/client architecture, a UHF link for communications between the ground and on-board applications, and a small process that decides which information to send through the telemetry stream.

Picture 5: Telemetry gateway general architecture

7 Proof of concept

The receiver/transmitter pair was tested during the A400M flight tests, where there was a predecessor system designed to manage predefined lists. The lists were elaborated on ground (prior to performing the flight) with the requirements of the specialists. This way of working had some drawbacks:
- Mistakes in the elaboration of the lists cannot be solved during the flight. For example, if one parameter is missing or is suddenly needed in the list, there is no possibility to add this parameter to any list on board.
- It is necessary to make a previous analysis of the parameters needed for each test point of the flight tests. Sometimes this task can be extremely costly and difficult to perform in the analysis phase.

No coverage tests were performed, but theory states that the coverage range is above the telemetry coverage. Once the hardware part had been tested, a software test bench was developed to test the behaviour. Below is the list of software components used in the test bench.

On-board simulation processes:
- FTPlayer, to simulate the acquisition stream (on board) based on IENA packets. The simulation uses a recorded flight in pcap (packet capture) format.
- The telemetry upload message to filter (passes every ARINC 429 message but message number 2).
- NetFxS, to simulate a client requesting parameters dynamically from the FxS server.
- FxSIENA Server, which receives IENA packets and sends the requested parameters to the client.
- Telemetry Gateway, which receives the list from the UHF receiver (using an RS232 connection) and filters the acquisition stream according to the dynamic parameter list.

On-ground simulation processes:
- NetFxS, to simulate an on-ground client requesting parameters dynamically from the FxS server.
- FxSIENA Server, which receives IENA packets from the telemetry stream. This is a modified version that generates the list of requested parameters on ground and sends the list to the RS232 transmitter.

A proof of concept has been successfully carried out in the lab. The tests managed to send 100 Kb of information between transmitter and receiver for a 10 Mbit/s acquisition stream with 10000 parameters requested, producing up to 2 Mbit/s of filtered telemetry stream.

8 Next Steps

Once the system has been tested in the lab, the next stage is to test the system on a flight test bed. Over the next year, a C295 aircraft will be equipped with the hardware and software to test the complete system. The system will be improved with the following capabilities:
- Bandwidth control: FxSIENA will control and notify that the maximum bandwidth limit is not exceeded.
- Improvement of the uplink communication protocol (compression, retransmission and checksum).

9 Conclusions

While the flight test community is waiting for the next generation of telemetry led by the iNET project, Airbus Defence & Space Flight Test Spain is exploiting the maximum capabilities of the latest Ethernet-based technologies. This improvement will make it possible to achieve more flight test efficiency, to have a better response time to telemetry requirements and to overcome bandwidth constraints.

10 References

[1] Tom Grace, "Telemetry of the Future"
[2] Michael W. Dillard, "FXS – A Bridge between Worlds", Society of Flight Test Engineers, 2004
[3] S. Martin, "IENA & Ethernet Format Overview", internal Airbus Group distribution
[4] CWC-AE, White Paper, "Packet header structures and payload structures for iNET-X application layer packetization protocols"
[5] "iNET System Architecture, version 2007.1", Central Test and Evaluation Investment Program (CTEIP), July 2007
[6] Abdul Jabbar, Erik Perrins, James P.G. Sterbenz, "A Cross-Layered Protocol Architecture for Highly-Dynamic Multihop Airborne Telemetry Networks"

11 Acronyms

FxS - Flight Test Data Exchange Service
LAN - Local Area Network
UDP - User Datagram Protocol
IP - Internet Protocol
IENA - Installation d'Essai Nouveaux Avions
UHF - Ultra High Frequency
HW - Hardware
FSK - Frequency Shift Keying
AM - Amplitude Modulation
VHF - Very High Frequency
CRC - Cyclic Redundancy Check
FTI - Flight Test Instrumentation
iNET-X - Extended iNET
TCP - Transmission Control Protocol


ETTC 2015 – European Test & Telemetry Conference

Flexible Switching for Flight Test Networks

Diarmuid Collins (1)
1: Curtiss-Wright, Dublin

Abstract: The network switch is a critical element in the flight test network. All devices in the network are configured, synchronised and managed via the switch. In addition to this, all acquired data is routed through the switch. For these reasons, the flight test network switch has always needed to be rugged and reliable, with high throughput and simple, intuitive setup. Ethernet technology and the move towards open standards within FTI systems have enabled flight test networks to become increasingly flexible and heterogeneous. Modern FTI networks may have different synchronisation and data transmission protocols running simultaneously. It is also important to quickly switch network configurations for different flight profiles and to enable new features to be easily added to existing installations. This paper examines the increasing network interoperability and flexibility challenges and discusses how the network switch is best placed to provide solutions.

Keywords: Ethernet, switching, FTI, PTP, SNMP

1. Introduction

In Flight Test Instrumentation (FTI), as the acquired volume of data increases, the industry is migrating from IRIG 106 Chapter 4 PCM to Ethernet networks. Ethernet has a long history in the commercial and industrial markets. Since the initial definition in the 1970s and the first agreed IEEE 802.3 standard in 1983, Ethernet has grown commercially into a multi-billion dollar market, and the technology has developed to be capable of transferring data at rates in excess of 100 Gbps. Using Ethernet in FTI networks brings a number of significant advantages:
• A wide range of off-the-shelf commercial Ethernet products, from switches and network interface cards to recorders and other equipment.
• Mature standards built around Ethernet for the transmission of data, the configuration of networking equipment and the synchronisation of network elements.
• A wide range of software, both commercial and open source, for interfacing with and manipulating Ethernet data and equipment.
• Scalable network infrastructure and data rates from 10 Mbps to 100 Gbps, with a future path to higher rates.

FTI networks have a number of requirements that necessitate specific consideration and place constraints on Ethernet networks. Traffic on FTI networks tends to be heavily asymmetric; that is to say, the data rates and volume of traffic on an FTI network in one direction are far higher than in the opposite direction. Determinism and loss-less transmission are two highly desirable features in an FTI network. To ensure the transmission and recording of all the acquired parameters, packet loss on the network is not acceptable, regardless of the network layer or application layer protocol being used. In many commercial implementations, the reliability of the data transfer is handled by the transport layer, requiring retransmission of lost packets.

Figure 1: FTI network elements

2. The Network Switch

The core of an Ethernet network is the switch. An Ethernet switch may operate at one or more layers of the OSI network model [1]:
• Layer 1: the lowest layer switch is known as a repeater or a hub. It is a simple device which does not manage the traffic through the device.
• Layer 2: a network bridge, which switches Ethernet packets based on MAC addresses.
• Layer 3/4: commonly known as routers. These switch network traffic based on IP, TCP, UDP and application layer data.
FTI network switching typically requires layer 3 and 4 switching at a minimum, where traffic is routed and switched based on UDP ports and on IENA [2] and iNET-X [3] stream identifiers.

3. Switch Design

COTS Ethernet switches and switch cores support a wide range of features and requirements driven by commercial Ethernet networks. Dynamic switching and self-learning of network topologies are required to support dynamic and changing networks in benign environmental conditions. FTI networks, on the other hand, may not require all of these features, but instead need to be rugged and very reliable. Ruggedness is dictated by a number of environmental standards, specifically DO-160 [4] and MIL-STD-704 [5]. These two standards define a minimal set of environmental test conditions, covering temperature, humidity, shock, vibration and the power interface to the aircraft, among others.

Network switches can be implemented using two main approaches. Application Specific Integrated Circuits (ASICs) can be used to perform the switching and configuration of the network switch, usually with on-chip microprocessors (MCUs). These MCUs run management and configuration firmware on an RTOS, or even an embedded OS such as Linux. ASIC development is a very expensive undertaking and, as a result, a very limited number of large companies such as Marvell [6] and Intel design very flexible switching products which are then sold off the shelf. OEM manufacturers integrate these products, customising the firmware to implement the feature set of interest for their product. Customisation of the lower level hardware is not possible without commercial justifications in the hundreds of millions of dollars range.

A second approach is implementing the switching and management functionality in Field Programmable Gate Arrays (FPGAs). FPGAs have the advantage of much shorter and cheaper development cycles, with some trade-off in the volume of supported features. The switch manufacturer can design the feature set of interest for their product line and exclude the unwanted functionality that the more general purpose ASIC switch cores support. For an ideal FTI switch, the latter approach has significant advantages. Within the FPGA, a store-and-forward switch fabric can be implemented using state-machine based code. Dynamic learning algorithms for routing and an on-board OS are not required due to the more limited set of requirements, reducing the time from power-up to operation, simplifying the design and, consequently, increasing the reliability of the switch. The static forwarding and filtering configuration, stored in on-board non-volatile memory, allows the switch to start routing based on a pre-defined set of rules as soon as power is applied. As an example, the NET/SWI/101 from Curtiss-Wright powers on, achieves link-up and is transmitting within 2 seconds [7].

In certain FTI networks, the ability to tap an Ethernet link for monitoring purposes can be a very useful feature. Most switches can be configured to perform such functionality; however, minimising the latency through the switch can be challenging. FPGA-based designs can be configured to bypass the core, keeping latency to the minimum for "tap"-like performance.

4. Time Synchronisation in FTI Networks

Ethernet networks support a number of well-defined and well-supported time synchronisation protocols. The two most widely known and used are the Network Time Protocol (NTP) [8] and the Precision Time Protocol (PTP).
4.1. NTP

NTP is a time synchronisation protocol widely used to synchronise desktop computers on packet-switched networks, most famously the Internet. Sub-second accuracy is possible, with simplified implementations known as SNTP also available. The accuracy is good enough for consumer applications.

4.2. PTP

PTP is an IEEE standard used to synchronise clocks in a network, using similar principles to NTP. Unlike NTP, it was designed to achieve sub-microsecond accuracy. This accuracy makes it more suitable for FTI networks than NTP. The original standard was agreed in IEEE 1588-2002 [9] and is known as PTPv1. The second revision of the standard was agreed in IEEE 1588-2008 [10], improving accuracy, precision and robustness. However, PTPv2 is not backward compatible with PTPv1.

4.3. PTP in FTI Networks

Synchronisation of all data acquisition units in an FTI network is a key requirement. The time correlation of the data on the network is a function of the synchronisation accuracy. Clearly, the time synchronisation protocol of choice is PTP. This raises the requirement for the support of a number of PTP related features in the ideal FTI switch.

PTP Grandmaster. In a PTP-synchronised network, one element in the network acts as the master to all the time slaves; this is the Grandmaster (GM). The GM acquires time from an external time source such as GPS, IRIG analog or digital, or a battery-backed Real Time Clock (RTC), and synchronises the slaves to this time source. With the non-backward compatible standards PTPv1 and PTPv2, support for both grandmasters is required.

PTP Transparency. In a larger network where the switch is not a PTP grandmaster, to improve the synchronisation accuracy, the switch should appear invisible to the PTP conversation. This is known as PTP transparency. The propagation time of the PTP packets through the switch is measured and the timestamps are adjusted accordingly, removing the propagation delay. Support for PTP transparency is required in both PTPv1 and PTPv2 modes of operation.

Bridging of PTP Protocols. With non-backward compatible protocols, it is not uncommon for network devices supporting either PTPv1 or PTPv2 to co-exist on the same network. Ensuring that these PTP clients are synchronised to the one time source requires the FTI switch to support the translation, or bridging, between the two protocols. This is a very powerful and useful feature, allowing the network designer to mix clients comfortably on the one network.

Figure 2: Mixed PTP clients in one network with translation

Multiple Time Sources. While PTP is the time synchronisation mechanism on the network, the absolute time needs to be acquired by the grandmaster to allow for accurate absolute time synchronisation. Historically, IRIG-B is a standard created by the US military, defined in 1960, with the latest revision of the standard published in 2004. This standard is widely supported in FTI networks in both analog and digital formats. The Global Positioning System (GPS) is another very popular space-based location and time synchronisation system. If the FTI switch supports GPS, it allows the time to be synchronised to the satellite-based atomic clocks. This is a very accurate and cost-effective mechanism for acquiring absolute time.

4.4. Time Synchronisation in the Ideal FTI Switch

As described, there is a long list of time synchronisation features that the ideal FTI switch should support, to a high level of accuracy.
These include acting as a PTPv1 or v2 GM, taking time sources from GPS, IRIG or free-running from an on-board RTC. The bridging of PTP protocols allows the FTI engineer to define a per-port PTP protocol, selecting between v1 and v2.

5. Traffic Filtering in an FTI Network

As previously mentioned, switches typically route data based on a certain layer of the OSI model. In a layer 2 switch, the Ethernet MAC addresses are used to automatically route traffic to the ports on which each particular networking interface is connected. A layer 3 or 4 Ethernet switch can route traffic based on IP addresses or UDP ports, for example.

FTI networks are heavily asymmetric, with a large number of sources but a limited number of sinks. These sinks may have very different requirements. Network recorders typically have a very large bandwidth and storage space, so they will generally record all the traffic on the network for later analysis and archiving. As a result, the network switch will generally route all traffic to the ports on which the recorder is connected, filtering none of the traffic. In certain applications it may also be desirable to separate certain high volume traffic onto a dedicated recorder; one example may require all video traffic to be recorded on a dedicated recorder. Such selective switching could be implemented using a dedicated multicast IP address for video traffic, the FTI switch then being required to switch this traffic to the dedicated video recorder.

Transmitters, on the other hand, have a very limited bandwidth but give engineers on the ground very valuable insight into key information on the FTI network. In this scenario, the switch is required to filter based on very specific parameters of the Ethernet traffic. In IENA traffic, a specific stream identifier in combination with a UDP port may contain the parameters of interest. The FTI switch therefore requires the ability to switch based on multiple header fields at all layers of the OSI model.

Figure 3: Dynamic routing in a configurable crossbar

An on-board data processing unit connected to the network would have similar requirements to the transmitter, in that it could only process a subset of the traffic during the flight. However, in addition to filtering the traffic, the required switching configuration could change during the flight, to allow the engineer to perform analysis at different phases of the flight. For example, on take-off the switch could be configured to pass iNET-X stream identifiers in the range 0x1 to 0xF to the data processing unit, then at altitude filter this traffic, allowing all traffic on UDP port 4444.

The FTI switch, therefore, should have a rich set of filtering and switching functionality built into the switch core. A standard set of switching rules based on layer 2 and 3 header fields should be supported. In addition to this, it is desirable that custom switching rules can be implemented at the application layer. Even more powerfully, the engineer could define fields within the payload of the packet and filter based on these values, as sketched below. This level of flexibility results in a very powerful switch. All these filters should be stored on the switch in non-volatile memory, in an efficient lookup table, to allow the traffic to be filtered at line speed through the switch, avoiding any bottlenecks in the data path.
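A software model of this per-mode filtering behaviour is sketched below: rules match on the UDP destination port and on a 16-bit stream identifier read out of the payload, mirroring the take-off/altitude example above. The field offsets are illustrative assumptions; real IENA and iNET-X layouts define their own header positions.

```python
# Toy model of mode-based filtering on UDP port and stream identifier.
import struct
from dataclasses import dataclass

@dataclass
class Rule:
    udp_port: int | None      # None = wildcard
    stream_ids: range | None  # None = wildcard
    out_port: int             # switch egress port

MODES = {
    "takeoff":  [Rule(None, range(0x1, 0x10), out_port=3)],  # IDs 0x1-0xF
    "altitude": [Rule(4444, None, out_port=3)],              # UDP port 4444
}

def route(frame: bytes, mode: str) -> int | None:
    """Return the egress port for the first matching rule, else drop."""
    udp_dst = struct.unpack_from(">H", frame, 14 + 20 + 2)[0]        # Eth+IPv4
    stream_id = struct.unpack_from(">H", frame, 14 + 20 + 8 + 4)[0]  # assumed offset
    for rule in MODES[mode]:
        if rule.udp_port is not None and rule.udp_port != udp_dst:
            continue
        if rule.stream_ids is not None and stream_id not in rule.stream_ids:
            continue
        return rule.out_port
    return None

# Demo: a dummy 60-byte frame with UDP port 4444 and stream ID 0x0007
# matches a rule in both modes.
frame = bytearray(60)
struct.pack_into(">H", frame, 36, 4444)
struct.pack_into(">H", frame, 46, 0x0007)
print(route(bytes(frame), "takeoff"), route(bytes(frame), "altitude"))
```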
6. Configuration of Network Switches

With the expanding features and configuration options available on FTI switches, ease of configuration is more important than ever. The configuration of such devices can be implemented using either proprietary configuration software or open standards which have been adopted for such purposes. TFTP [11] is a file transfer protocol that has been developed specifically for lightweight file transfer. The server can be implemented with minimal CPU and RAM requirements, making it suitable for embedded devices. For the transfer of large configuration binary files, it is a widely adopted protocol used on switches. The Simple Network Management Protocol (SNMP) [12] is an open internet standard for managing devices on an IP network. It is typically used to monitor and configure switches and recorders. It is a self-documenting protocol that is used to configure smaller volumes of configuration data. Off-the-shelf SNMP managers are widely available for the Windows, Linux and OSX operating systems, which can then be used to manage the networked devices.

Configuration itself can be split into two distinct phases: dynamic, on-the-fly configuration, and static configuration prior to acquisition. FTI network topologies are generally relatively static and, as a result, these networks can be defined prior to flight and the switches configured with the routing and filtering tables. With a broad range of options available, the configuration at this point can be significant, with settings for different filtering options to be set up for a number of phases of the flight. The configuration would ideally be stored in a local setup file on the engineer's PC, to make iterative changes to the network configuration simple. An example of such a file format is XidML (eXtensible Instrumentation Definition Markup Language) [13].

Figure 4: Simple crossbar configuration

Once this configuration has been completed, programmed and stored locally, the switch is configured and ready for flight. Once in flight, this static configuration will not be modified; however, the dynamic configuration aspect now takes place. The FTI engineer may want to monitor network traffic, link status and the health of the network and the switch between the various phases of the flight. SNMP managers running on PCs connected to the network can select between the various pre-configured configurations in a seamless manner, with little or no packet loss.

Monitoring using an SNMP manager allows the FTI engineer to query the health of the network switch; however, it can be useful to have automatic or passive health reporting. Such a facility would allow the switch to periodically report on various metrics in a status packet. This status packet could easily be telemetered to the ground as well as recorded. The advantage of such an approach is that a query/response mechanism, which over many telemetry links is not possible, is not required, and the information density of a status packet makes it a very efficient use of the limited telemetry bandwidth.

As an example, the NET/SWI/101 FTI switch has 254 configurable filters per "mode", which can filter traffic based on 8 x 16-bit fields in both the header and payload of the Ethernet traffic. Each "mode" represents a selectable mode of operation, of which there are 16. This allows the FTI engineer to design a very powerful set of configurations for their network and dynamically switch between pre-defined configurations using SNMP.
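Selecting one of the pre-configured modes from a ground or on-board PC can then be a one-object SNMP set. The sketch below uses the pysnmp library; the OID is a placeholder, as the real object for mode selection would be defined by the vendor's MIB.

```python
# Sketch: select a pre-configured switch "mode" via an SNMP set (pysnmp 4.x).
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, setCmd)
from pysnmp.proto.rfc1902 import Integer

def select_mode(switch_ip: str, mode: int) -> None:
    error_indication, error_status, _, _ = next(setCmd(
        SnmpEngine(),
        CommunityData("private"),                 # write community (assumed)
        UdpTransportTarget((switch_ip, 161)),
        ContextData(),
        # Hypothetical enterprise OID standing in for "active filtering mode".
        ObjectType(ObjectIdentity("1.3.6.1.4.1.99999.1.2.0"), Integer(mode))))
    if error_indication or error_status:
        raise RuntimeError(str(error_indication or error_status))

# e.g. switch to the "altitude" filtering mode during the flight:
# select_mode("192.168.1.10", 2)
```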
7. Future-Proofing Network Switches

Over time, as the size and complexity of airborne networks continue to increase, the demand for additional features and performance upgrades continues. Some of these upgrades will require replacing existing hardware in the instrumented airplane; however, there is significant scope for FPGA-based designs to incrementally upgrade the programmed "firmware" of the FPGA. This mechanism allows the user to remotely upgrade the feature set of the switch without physically removing, or even accessing, the switch. The upgrade process can be implemented in a similar mechanism to the static programming of the device, over the Ethernet interfaces using TFTP. Such an upgrade could feasibly be executed in minutes between flight tests, if the demand arose. Naturally, such a process has the potential to be interrupted, so significant effort and measures need to be taken to ensure that the process cannot result in a non-working or unusable switch. "Fall-back" firmware images are left untouched on the switch to ensure that any interruption in the programming cycle does not result in a non-working switch.

8. Conclusion

FTI network switches, while sharing some commonality with COTS Ethernet switches, have specific demands of their own. The relatively static nature of FTI networks, in combination with stringent reliability and ruggedness requirements, places specific demands that many switches cannot meet. Flexible switching and filtering requirements, advanced time synchronisation mechanisms and rugged, deterministic and scalable performance are currently met by the Curtiss-Wright NET/SWI/101 airborne switch, among others.

9. References

[1] ISO/IEC, "Open Systems Interconnection".
[2] F. Abadie, "A380 IENA Flight Test Installation Architecture", ETTC, 2005.
[3] Curtiss-Wright, "iNET-X Packet header structures and payload structures".
[4] Radio Technical Commission for Aeronautics (RTCA), "DO-160F Environmental Conditions and Test Procedures for Airborne Equipment", 2007.
[5] Department of Defense, "Aircraft Electric Power Characteristics".
[6] Marvell, "Marvell Prestera 98DX2101". Available: http://www.marvell.com/switching/assets/Marvell_Prestera_98DX21xx-41xx-001_product_brief.pdf
[7] Curtiss-Wright, "NET/SWI/101 Datasheet". Available: http://www.cwc-ae.com/product/netswi101
[8] Internet Engineering Task Force, "Network Time Protocol Version 4: Protocol and Algorithms Specification". Available: http://tools.ietf.org/html/rfc5905
[9] 1588 WG – Precise Networked Clock Synchronization Working Group, "1588-2002 – IEEE Standard for a Precision Clock Synchronization Protocol for Networked Measurement and Control Systems".
[10] 1588 WG – Precise Networked Clock Synchronization Working Group, "1588-2008 – IEEE Standard for a Precision Clock Synchronization Protocol for Networked Measurement and Control Systems".
[11] IETF, "RFC 1350 – The TFTP Protocol". Available: http://tools.ietf.org/html/rfc1350
[12] IETF, "Management Information Base for Network Management (RFC 1213)". Available: http://tools.ietf.org/html/rfc1213
[13] "Welcome to xidml.org – a website for the XidML community". Available: http://www.xidml.org/
10. Glossary

NTP - Network Time Protocol
PTP - Precision Time Protocol
FTI - Flight Test Instrumentation
ASIC - Application Specific Integrated Circuit
FPGA - Field Programmable Gate Array
TFTP - Trivial File Transfer Protocol
COTS - Commercial Off The Shelf
CPU - Central Processing Unit
RAM - Random Access Memory
GM - Grandmaster
RTC - Real Time Clock
SNMP - Simple Network Management Protocol
iNET - Integrated Network Enhanced Telemetry
iNET-X - iNET-Extended


Evolving Embedded Electronics Testing in HIL Simulation and Large-Scale Test Cells through Sub-ns Synchronization Systems via Time Sensitive Networks in Ethernet

Veggeberg, Kurt (1); Daurelles, Olivier (2)
1: National Instruments, 11500 N. Mopac Expwy, Austin, TX 78759, USA
2: National Instruments France, 2 rue Hennape, 92735, France

Abstract: Time Sensitive Networks in Ethernet are improving computer-based measurements from sub-microsecond to sub-nanosecond precision. By using message-based protocols that synchronize clocks, it is possible to generate the required precision signaling locally. This solution is scalable from within the same node to multiple nodes throughout the world. To fulfill the more demanding needs of test and measurement applications, IEEE 1588 (PTP) has been developed, which is able to provide sub-microsecond performance. Much better performance is becoming possible.

Keywords: Ethernet, IEEE-1588.2, IEEE 802.1, White Rabbit, CERN

1. Introduction

Many of the research activities concerning IEEE 1588 have been targeted at Ethernet. Already, CERN is able to achieve sub-ns synchronization in the Ethernet-based timing, trigger and control system for the Large Hadron Collider (LHC), the biggest machine in the world. This is an example of what Time Sensitive Networks and standards such as IEEE 1588 and IEEE 802.1 will be bringing to COTS Ethernet, allowing time-sensitive and best-effort data to coexist. The key ideas of the White Rabbit (WR) technology used by CERN can be adapted and included in the next revision of PTP (Precision Time Protocol), enabling standard-compliant PTP devices to achieve high accuracy synchronization using methods prototyped and tested in WR.

2. Time Sensitive Networks (TSN) are revolutionizing monitoring, control & test applications

Measurement and automation systems involving multiple devices often require accurate timing in order to facilitate event synchronization and data correlation. To achieve this synchronization, devices in the system must either have direct access to timing signals from a common source, or the devices must synchronize their individual clocks in order to share a common time base. There are advantages and disadvantages to both methods of device synchronization.

In a time-based synchronization scheme for data acquisition, we still have to provide the same timing signals as in a signal-based scheme, but the way in which we do so is different. All of the timing signals, triggers and clocks, are based off a common time reference. Examples of time references are GPS, IRIG-B, 802.1 and IEEE 1588. Devices on the network, including switches and routers, can be synchronized very precisely via the IEEE 1588 and IEEE 802.1 "precision time protocol" standards.

Some examples of applications where this type of synchronization can prove extremely valuable are large, distributed hardware-in-the-loop (HIL) test systems and test cell measurement systems, especially in industries like aerospace and defense. Oftentimes, an "iron bird" is created where the full electronics inside a plane are tested by simulating the environment to make them think they are actually in an operating plane. When performing this type of HIL test, the large electronic system of the plane can be broken down into several different components representing separate subsystems performing different functions.
For example, a plane's flaps, slats and rudder engines can be treated separately and, in order to provide accurate test results, distributed processing power over the area of the plane may be required. This increase in computational power can then enable efficient testing by aiding the execution of large, complex simulation models, but these architectures may also be needed to allow for the testing of high channel count systems in application areas like structural tests of wings. Depending upon how the system is set up, it can be necessary to provide shared trigger and timing signals between nodes while also requiring deterministic data sharing.

Each subsystem may generate a local reference clock signal (local to the subsystem), which may be aligned and locked with respect to one or more similar reference clock signals of other subsystems, via a high-level precision time protocol (PTP) such as IEEE 1588 or a global positioning system (GPS) protocol. For instrumentation systems, each DAQ card (i.e. device) within a given subsystem may generate a local sample clock (local to the DAQ card) based on the local reference signal, and generate a local trigger clock (local to the DAQ card) based on the local sample clock. The trigger clocks may be synchronized with respect to each other, and each DAQ card may then use its trigger clock to synchronize any received trigger (or trigger pulse), resulting in received triggers being synchronized across all participating DAQ cards across all participating subsystems.

Sharing a common timing signal becomes unfeasible when the distance between devices increases, or when devices frequently change location. Even at moderate distances such as 50 meters, a common timing signal may require significant costs for cabling and configuration. Up to 100 meters, trace lengths can be matched; beyond that, synchronization is lost at 1.5 ns/ft, so over 200 m approximately 900 ns of synchronization can be lost. IEEE 1588 can cover 200 m through a single switch (single subnet) with what we have today.

3. Using message-based protocols allows synchronization of clocks by compensating for path delays

In general, distributed measurement and control systems often require their composite parts to be aligned to the same timebase. One useful result of synchronization in these applications is the sharing of synchronized periodic signals, which can be used to take measurements at the same time or to provide known relationships between control units in a distributed environment. In these situations, distributed clock synchronization becomes necessary. Using this approach, devices act on timing signals originating from a local clock which is synchronized to the other clocks in the system. Examples of distributed clock synchronization include devices synchronized to a GPS satellite, a PC's internal clock synchronized to an NTP time server, or a group of devices participating in the IEEE 1588 protocol. Instead of sharing timing signals directly, these devices periodically exchange information and adjust their local timing sources to match each other.

The synchronization of distributed clocks requires a continuous process. A clock is essentially a two-part device, consisting of a frequency source and an accumulator. In theory, if two clocks were set identically and their frequency sources ran at exactly the same rate, they would remain synchronized indefinitely.
In practice, however, clocks are set with limited precision, frequency sources run at slightly different rates, and the rate of a frequency source changes over time and temperature. Most modern electronic clocks use a crystal oscillator as a frequency source. The frequency of a crystal oscillator varies due to initial manufacturing tolerance, temperature and pressure changes, and aging. Because of these inherent instabilities, distributed clocks must continually be synchronized to match each other in frequency and phase. By using message-based protocols that synchronize clocks, we are able to generate the required signaling locally. This solution is scalable from within the same node to multiple nodes throughout the world.

4. Ethernet IEEE 1588 Synchronization

IEEE 1588 provides a standard protocol for synchronizing clocks connected via a multicast-capable network, such as Ethernet. Released as a standard in 2002, IEEE 1588 was designed to provide fault-tolerant synchronization among heterogeneous networked clocks requiring little network bandwidth overhead, processing power or administrative setup. IEEE 1588 provides this by defining a protocol known as the Precision Time Protocol, or PTP. IEEE 1588 is designed to fill a niche not well served by either of the two dominant protocols, NTP and GPS. It is designed for local systems requiring accuracies beyond those attainable using NTP, and for applications where a GPS receiver at each node is too expensive or where GPS signals are inaccessible.

5. Using Time-Based Synchronization

Slave clocks synchronize to the 1588 grandmaster using bidirectional multicast communication. The grandmaster clock periodically issues a packet called a Sync packet, containing a timestamp of the time when the packet left the grandmaster clock. The grandmaster may also, optionally, issue a Follow_Up packet containing the timestamp for the Sync packet. The use of a separate Follow_Up packet allows the grandmaster to accurately timestamp the Sync packet on networks where the departure time of a packet cannot be known accurately beforehand. For example, the collision detection and random back-off mechanism of Ethernet communication prevents the exact transmission time of a packet from being known until the packet is completely sent without a collision being detected, at which time it is impossible to alter the packet's content.

Figure 1: Using message-based protocols allows synchronization of clocks by compensating for path delays

The master periodically broadcasts the current time as a message to the other clocks. Under IEEE 1588-2002, broadcasts are sent up to once per second; under IEEE 1588-2008, up to 10 per second are permitted. Each broadcast begins with a Sync message sent by the master to all the clocks in the domain. A clock receiving this message takes note of the local time at which the message is received.
By sending and receiving these synchronization packets, the slave clocks can accurately measure the offset between their local clock and the master's clock. The slaves can then adjust their clocks by this offset to match the time of the master. The IEEE 1588 specification does not include any standard implementation for adjusting a clock; it merely provides a standard protocol for exchanging these messages, allowing devices from different manufacturers, with different implementations, to interoperate.

6. Time-Based Synchronization Accuracy Results

Several factors affect the synchronization levels achievable using IEEE 1588 over Ethernet. During the time between synchronization packets, the individual clocks in a system will drift apart from each other due to frequency changes in their local timing sources (a worked example appears at the end of this section). This drift can be reduced by using higher-stability timing sources and by shortening the intervals between synchronization packets. Temperature-compensated crystal oscillators (TCXOs) and oven-controlled crystal oscillators (OCXOs) provide higher stability than standard crystal oscillators, and atomic clocks provide still higher stability. In addition to stability, a clock's resolution affects the accuracy of the timestamps transmitted in the PTP synchronization messages: devices with a higher-resolution clock are able to timestamp messages more accurately. Also, variations in network delay, caused by jitter introduced by intermediate networking devices such as hubs and switches, reduce the achievable synchronization level. IEEE 1588 nevertheless provides an important alternative for systems requiring sub-microsecond synchronization in geographically distributed systems.

Figure 2: Time-based synchronization accuracy results
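As a rough illustration of the drift argument above: if the only error were a static frequency offset, the divergence accumulated between synchronization messages would scale linearly with the sync interval. The oscillator figures below (50 ppm crystal, 1 ppm TCXO, 0.01 ppm OCXO) are representative textbook values, not measurements from this paper.

```python
# Worst-case divergence between two free-running clocks, assuming only a
# static frequency error (real oscillators also wander with temperature).

def drift_ns(freq_error_ppm, sync_interval_s):
    return freq_error_ppm * 1e-6 * sync_interval_s * 1e9

for name, ppm in [("standard XO", 50.0), ("TCXO", 1.0), ("OCXO", 0.01)]:
    print(f"{name:12s}: {drift_ns(ppm, 1.0):8.0f} ns per 1 s sync interval")
# standard XO :    50000 ns   -> 50 us between once-per-second Syncs
# TCXO        :     1000 ns
# OCXO        :       10 ns
```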
7. Time Sensitive Networking brings deterministic Ethernet

Ethernet, Wi-Fi and other IEEE 802-based network technologies have been very successful in a large number of connectivity applications, but until very recently there was no way to provide critical time-sensitive services in those networks. The result has been a proliferation of specialized networks and connectivity systems for audio/video and real-time control applications. This lack of integration is in the process of being remedied by the creation of an IEEE 802 architecture for "time sensitive networking". It specifies a profile for the use of IEEE 1588 for time synchronization over a virtual bridged local area network, defining how IEEE 802.3 (Ethernet) and IEEE 802.11 (Wi-Fi) can all be parts of the same timing domain. It is based on three major advances: 1) Universal time synchronization, or "time awareness", in the network infrastructure: devices on the network, including switches and routers, can be synchronized very precisely via the IEEE 1588 and IEEE 802.1 precision time protocol standards. 2) Time-sensitive queuing and forwarding in all devices to provide lower, and guaranteed, delays for time-sensitive data. 3) Bandwidth and latency reservations so that the time-sensitive queues in the network do not overflow and packets are not dropped.

8. CERN's Timing, Triggering and Control system uses optical networks to achieve sub-ns synchronization

The LHC at CERN is one of the largest and most complex systems ever built. CERN is a complex of six circular and some linear accelerators which are interconnected. The biggest accelerator is the Large Hadron Collider (LHC), which is 27 km long. All the devices which serve the accelerators (magnets, kickers, etc.) need to be precisely synchronized and controlled by a central control system. CERN takes advantage of the FPGA serving as the timekeeper in the NI RIO platform and uses it to move blocks of graphite into place to absorb the protons that are not in the nominal path of the beam or, in other words, go astray. This process is commonly known as "collimation". Since this is a 27 km tunnel, there are more than 100 of these collimators around the tunnel that have to be synchronized accurately and reliably. In a given collimator, the PXI chassis run LabVIEW Real-Time on the controller for reliability and LabVIEW FPGA on the reconfigurable I/O devices in the peripheral slots to perform the collimator control for approximately 600 stepper motors with millisecond synchronization over the 27 km of the LHC. The timekeepers on the field-programmable gate arrays (FPGAs) on these devices give the level of control needed.

A decision was made to base the new CERN timing system on PTP. The project is an Ethernet-based network with low-latency, deterministic data delivery and network-wide, transparent, high-accuracy timing distribution. The White Rabbit Network (WRN) extension is based on existing standards, namely Ethernet, Synchronous Ethernet and PTP. The approach aims for a general-purpose, fieldbus-like transmission system which provides deterministic data and timing (sub-ns accuracy and ps jitter) to around 1000 stations. It automatically compensates for fiber lengths on the order of 10 km.

Figure 3: Enhanced Ethernet with White Rabbit synchronization and determinism

The White Rabbit Project focuses on an open design consisting of: Sub-nanosecond accuracy - synchronization of more than 1000 nodes via fiber and copper connections up to 10 km apart. Flexibility - a scalable and modular platform with simple configuration and low maintenance requirements. Predictability and Reliability - deterministic delivery of the highest-priority messages. Robustness - no losses of high-priority accelerator device control messages.

White Rabbit takes advantage of the latest developments for improving timing over Ethernet, such as IEEE 1588 (Precision Time Protocol) and Synchronous Ethernet. Synchronous Ethernet, also referred to as SyncE, is an ITU-T standard for computer networking that facilitates the transfer of clock signals over the Ethernet physical layer; this signal can then be made traceable to an external clock. Network synchronization in WR is based on a clock hierarchy with the highest-accuracy clock at the top. Slave clocks are PLL-based, locked to an external reference that is recovered from the data link. The PLL cleans the recovered clock by removing jitter generated by the clock-recovery circuitry before it is fed to the transmitting device. The key ideas of the White Rabbit (WR) technology can be adapted and included in the next revision of PTP, consequently enabling standard-compliant PTP devices to achieve high-accuracy synchronization using methods prototyped and tested in WR.

In a White Rabbit timing network, a timing master provides the master clock. The timing master provides the general timing to be used by all attached White Rabbit devices; it is syntonized with an atomic clock and synchronized with a GPS receiver. The components of a White Rabbit network are White Rabbit switches and White Rabbit nodes. Both components may be added dynamically to the network. Though conventional Gigabit Ethernet devices may be connected as well, only White Rabbit devices take part in the timing by using the timing information. The switch is the core element of the WR network, implementing standard IEEE 802.1x Ethernet bridge functionality and WR-specific extensions. The extensions are enabled only after a proper WR handshake has been performed; therefore, if a non-WR-aware device is connected, it sees a standard 802.1x switch.

9. Updated standards provide improved data acquisition and control with Ethernet

Upcoming changes to the Ethernet standards can yield a giant leap in performance, from nanoseconds to picoseconds. IEEE 1588 is designed to fill a niche not well served by either of the two dominant protocols, NTP and GPS; it is designed for local systems requiring accuracies beyond those attainable using NTP. To fulfill the more demanding needs of test and measurement applications, IEEE 1588 (PTP) has been developed, which is able to provide sub-microsecond performance. Many of the research activities concerning IEEE 1588 have been targeted at Ethernet. Research at CERN shows that clock synchronization between the master node, which provides the UTC reference clock, and the slave nodes, which synchronize to the reference clock, is possible in the ns range.

Currently the P1588 working group is working on a new edition of IEEE 1588. The Project Authorization Request (PAR) for the revision of the IEEE 1588-2008 standard was approved on 4 June 2013, with an expected completion date of 31 December 2017. The standard specifies requirements for mapping the protocol to specific network implementations and defines such mappings, including User Datagram Protocol (UDP)/Internet Protocol (IP versions 4 and 6) and layer-2 IEEE 802.3 Ethernet. The protocol enables heterogeneous systems that include clocks of various inherent precision, resolution, and stability to synchronize to a grandmaster clock. It supports synchronization in the sub-microsecond range with minimal network bandwidth and local clock computing resources, and it enhances support for synchronization to better than 1 nanosecond. The protocol specifies how corrections for path asymmetry are made, if the asymmetry values are known. The grandmaster can be synchronized to a source of time external to the system if time traceable to international standards or another source of time is required. These enhancements make the impossible possible for precision test and measurement applications, and as they are incorporated by the makers of equipment, they should make life easier for end-users.
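As a small worked sketch of the path-asymmetry correction just mentioned: if the extra delay of the master-to-slave direction over the mean path delay is known (by calibration; the two-way exchange cannot observe it itself), the naive symmetric offset can be corrected by that amount. The function below is illustrative only, not an implementation of the revised standard.

```python
# Sketch of an asymmetry-corrected offset. 'asym_ns' is the assumed extra
# delay of the master-to-slave direction over the mean path delay; it must
# be known from calibration, since the two-way exchange cannot observe it.

def corrected_offset(t1, t2, t3, t4, asym_ns):
    mean_delay = ((t2 - t1) + (t4 - t3)) / 2.0
    naive_offset = ((t2 - t1) - (t4 - t3)) / 2.0
    return naive_offset - asym_ns, mean_delay  # corrected offset, mean delay
```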


ETTC 2015 – European Test & Telemetry Conference

PTPv1 vs PTPv2: Characteristics, differences and time synchronization performance

Guillermo Martínez Morán (guillermo.m.martinez@airbus.com) (1)
(1: Flight Test Means - Airbus Defence&Space, Avd. John Lennon s/n, 28906 Getafe, Spain)

Abstract: The Precision Time Protocol (PTP), based on Ethernet network packets, is increasingly displacing traditional time synchronization based on IRIG-B. This paper briefly describes the main characteristics of the two existing PTP versions and underlines the main differences between them. As PTP requires special, expensive network equipment to achieve its maximum performance, tests using standard network switches are performed in order to evaluate whether the performance achieved is sufficient for typical flight test applications.

Keywords: Time synchronization, PTP performance, FTI application.

1 Introduction

When analyzing data coming from an aircraft, it is mandatory to have all of it dated against a coherent time reference, so that all measurements captured for a given event carry the same date and time. Very often the number or nature of the measurements requires the installation of many different hardware units. In order to date all the data coherently, time synchronization between the different hardware units becomes essential.

Historically, synchronization has been based on a point-to-point scheme where a single dedicated cable between two units transports a synchronization signal (typically an analog one). The introduction of Ethernet network schemes in instrumentation installations has driven the development of networked synchronization protocols. Networked schemes simplify the installations and allow more flexible architectures; moreover, better performance can be achieved more easily, since high-frequency analog signals are not necessary. Currently, the most important networked protocol for precision applications is the Precision Time Protocol (PTP), published as the standard IEEE 1588.

Section 2 summarizes the legacy synchronization architecture based on the standard IRIG time codes and shows its performance. Section 3 addresses the advantages of a networked synchronization architecture. Section 4 explains how PTP works; the main differences between the two existing versions, IEEE 1588-2002 (PTPv1) and IEEE 1588-2008 (PTPv2), are also addressed. Section 5 presents the results of several PTP performance tests performed using different network switches. The aim of the tests is to evaluate whether the synchronization performance achieved with standard network switches can be sufficient for typical flight test applications, so that expensive special network equipment may be avoided. Section 6 summarizes the main conclusions reached after testing PTP technology. The results demonstrate that PTPv1 can be used without any special switching hardware, obtaining accuracies similar to those achieved using IRIG-B. It is also shown that PTPv2 must be used with special switching hardware in order to obtain serviceable accuracies. The PTP accuracies achieved are highly dependent on the hardware manufacturer. This is more pronounced with PTPv2 technology, which seems to be less mature. As the probable surviving standard will be PTPv2, manufacturers must evolve the technology in the medium term.

2 Legacy synchronization technology

The Inter-range Instrumentation Group (IRIG) is a US body defining standards to be used by the instrumentation community. The IRIG-200 standard [1] (last revised in 2004) addresses the harmonization of synchronization across test ranges by specifying a number of possible signals to be used.
IRIG-200 classifies signals by code format, modulation, frequency and coded information [1, p. 4-1]: 6 code formats, 3 types of modulation, 6 frequencies and 8 information combinations are possible. By combining these elements, different types of synchronization signals can be generated. The one most used in the Flight Test Instrumentation (FTI) community is the IRIG-B 120 signal. It is a B format, i.e. a 1-second cycle with 100 bits per second [1, p. 6-6]. The first digit, 1, denotes amplitude modulation; the following 2 stands for a 1 kHz sine-wave carrier frequency; the last 0 defines the information included: Time_of_Year, Control_Functions and Time_of_Day expressed as seconds of the day in binary count. IRIG-B 120 accuracy is only guaranteed to be 1 millisecond (equal to one period of the 1 kHz carrier signal). Nevertheless, by accurately measuring the carrier phase angle, it is possible to achieve an accuracy of tens of microseconds [2].

If better accuracy were required, an unmodulated signal would have to be used, since it can provide tens of nanoseconds of accuracy. Nevertheless, distributing this signal to several units is quite complex [2]. IRIG-B 120 was widely adopted by the FTI community because, as a modulated audio signal, it could be distributed much more easily [2]. Even so, impedance matching of the cable is crucial in order to get the maximum performance. This limits the scalability of an IRIG-B installation. Daisy-chain distribution schemes, which scale poorly, are common; parallel schemes requiring specific hardware with a limited number of outputs can also be found.

Figure 1.- Legacy IRIG-B architectures (daisy chain and parallel distribution)

3 Networked synchronization concept

Nowadays, Ethernet-based instrumentation is used more and more in FTI. Three main advantages are driving the change. First, standard and widely adopted hardware makes the technology cheap and easy to access. Second, the high degree of scalability of an Ethernet star topology makes it easy to add more hardware units. Third, a high concurrent data rate can be exchanged transparently for the user. Moreover, by implementing different protocols over Ethernet, it is possible to exchange information for different purposes over the same cabling.

Figure 2.- Ethernet star topology

Therefore, it is possible to reduce cabling complexity in FTI installations by using a synchronization protocol over Ethernet. In addition, using digital information rather than analog signals drastically reduces the performance impact of impedance mismatching and other noise sources. Synchronization also benefits from Ethernet's scalability. Several Ethernet time synchronization protocols have been defined over the years, such as the UNIX daemon timed and the Digital Time Synchronization Service. Nevertheless, these protocols are day-time oriented (mainly for IT purposes) and have no clock-discipline features, which makes them worthless for instrumentation purposes. The Network Time Protocol (NTP) [3] was the first to include clock discipline. Typical NTP accuracy values are tens of milliseconds.
Nevertheless, under ideal Ethernet network conditions, 1 millisecond can be achieved [3]. Better values are not possible due to the operating-system stack latency [4].

4 PTP explained in depth

As NTP performance is not sufficient for many precision applications, such as FTI, the newer Precision Time Protocol (PTP) must be used. At its most basic, PTP is similar to NTP but implements hardware timestamping in each Ethernet port. This eliminates the operating-system stack latency, which is the main error contribution in NTP. Hardware timestamping points show delays below 50 ns, fluctuations below 1 ns and asymmetry below 100 ns, whereas software timestamping points show delays and fluctuations of 0.1-3 µs and asymmetry up to 3 ms.

Figure 3.- PTP on the network stack [5][6, p. 55]

4.1 PTP basic operation

Like NTP, PTP is a master-slave protocol based on packet exchange between both ends. Two processes take place simultaneously: syntonization and synchronization.

Syntonization is the mechanism that makes the slave clock run at the same speed as the master clock. It is achieved by using a continuous flow of Sync messages from master to slave.

Figure 4.- Syntonization process [5]

Sync messages are dated with t1 when leaving the master clock and with t2 when entering the slave clock. The slave clock must adjust its speed until both intervals are equal. The master clock time t1 can be sent to the slave clock using two different methods: one-step clocks include t1 in the Sync message itself, while two-step clocks send t1 in a later message called the Follow_up message. Two-step clocks are easier to design, as the heavy work is done in software, while one-step clocks have a more complex hardware part because they must be able to timestamp the Sync message on the fly [5][7]. Sync messages are sent periodically, at a rate of a few messages per second. Continuous control is mandatory, as oscillators are susceptible to environmental changes.

Synchronization is the mechanism that determines the slave's offset from the master (i.e. the difference in seconds between both clocks). This is done by measuring the round-trip time of the packets. Under the assumption of a symmetrical transmission path for the Sync and Delay_Req messages, the offset and delay are obtained as follows [6, p. 50-53]: with t1/t2 the departure/arrival times of the Sync message and t3/t4 the departure/arrival times of the Delay_Req message, delay = ((t2 - t1) + (t4 - t3)) / 2 and offset = ((t2 - t1) - (t4 - t3)) / 2. In order to keep the slave clock synchronized over time, continuous correction is mandatory. For this purpose, the values obtained from the synchronization process are used to feed a Proportional-Integral (PI) control loop [6, p. 146].

Figure 5.- Synchronization process [5]
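The following minimal sketch shows how the measured offset might feed a PI control loop of the kind described above. The gains and the one-second update model are invented for the example; they are not taken from this paper or from any real PTP servo.

```python
# Invented-gain PI servo: the measured offset trims the slave's rate.
# One loop iteration represents one second of simulated time.

class PiServo:
    def __init__(self, kp=0.7, ki=0.3):
        self.kp, self.ki, self.integral = kp, ki, 0.0

    def rate_trim_ppb(self, offset_ns):
        self.integral += offset_ns
        return -(self.kp * offset_ns + self.ki * self.integral)

servo = PiServo()
offset_ns = 800.0                      # initial slave-minus-master offset
for _ in range(8):
    trim = servo.rate_trim_ppb(offset_ns)
    offset_ns += trim                  # 1 ppb applied for 1 s == 1 ns
    print(f"{offset_ns:8.1f} ns")      # offset is driven towards zero
```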
4.2 Introducing PTP in a network

As stated in the previous section, PTP works reasonably well when the condition of a symmetrical transmission path is fulfilled. If both ends were directly connected through a cable, this assumption would hold. Nevertheless, in an Ethernet network there is normally a switch in the middle, whose variable queues introduce jitter in transmission times. To overcome this issue, PTPv1 introduced the concept of the Boundary Clock (BC) (see Figure 6). Section 5.3 and Section 5.4 will address the impact on synchronization accuracy when using a regular switch. Several boundary clocks may be cascaded, the same way regular switches are. This cascaded scheme makes the errors introduced at each step accumulate downwards.

Figure 6.- Boundary Clock schema

Therefore, when many steps are introduced, synchronization performance is affected. Although this is not a big issue in an FTI installation (typically consisting of one switch, or two cascaded at most), it is a huge issue for telecommunications or energy applications. For this reason, new switch types were introduced in the PTPv2 definition.

4.3 PTPv2 improvements over PTPv1 [5]

This section contains not all, but the most relevant, improvements introduced in PTPv2. None of them are important for FTI instrumentation but, as PTPv2 will be the probable surviving standard, it is important to know the main differences from PTPv1.

4.3.1 Higher message rate

Time synchronization for mobile 4G technology requires frequent state updates. For this reason PTPv2 includes the possibility to send up to 128 Sync messages per second, instead of 1 packet per second in PTPv1.

4.3.2 Shorter Sync message

The PTPv1 Sync message includes information related to the master clock in order to allow slaves to select the best master clock. In PTPv2 this feature is performed separately, as it is not necessary to update this information 128 times per second. The Sync message is thus reduced to 44 bytes from the original 128 bytes in PTPv1.

4.3.3 Timestamp resolution

The maximum timestamp resolution representable in PTPv1 messages is 1 ns, while in PTPv2 it has been improved to 2^-16 ns. Sub-nanosecond accuracy is therefore somehow foreseen. In addition, in order to solve the accumulative errors introduced by Boundary Clocks, new switch types are introduced (see Section 4.4).

4.4 Transparent clocks

PTPv2 defines new types of switches in order to avoid the accumulative error scheme introduced by Boundary Clocks when cascaded. A Transparent Clock (TC) is an Ethernet switch capable of measuring the time a PTP message spends in the switch during its transit. This time is called the residence time. TCs are able to add the value of the residence time to the correction field of the Sync message in a one-step scheme, or of the Follow_up message in a two-step one. Two types of TCs are defined.

Figure 7.- Sync message through two E2E TCs

4.4.1 End-To-End (E2E)

E2E TCs allow the slave to know the accumulated residence time of the exchanged packets, as each TC adds its residence time to the correction field of every packet passing through it (see Figure 7; a toy model of the mechanism follows below). The same scheme is applied to the Delay_Req packet; as this packet is sent to the master clock by each slave clock, the master clock ends up knowing the latency to each slave on the network (see Figure 9).
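The toy model below visualizes the correction-field mechanism: each E2E transparent clock adds its residence time to the Sync message's correction field, so the slave can strip the queuing delay and recover the pure wire delay. All numbers are made up for the illustration.

```python
# Toy model of a Sync message crossing two end-to-end transparent clocks:
# each switch adds its residence time to the correction field. Values in ns.

sync = {"t1": 0, "correction": 0}
for residence in (2_300, 5_100):       # hypothetical per-switch dwell times
    sync["correction"] += residence    # stamped by each TC in transit

t2 = 8_000                             # arrival time at the slave
wire_delay = t2 - sync["t1"] - sync["correction"]
print(wire_delay)                      # -> 600 ns of cable delay remains
```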
Figure 8.- Sync message through two P2P TCs

4.4.2 Peer-to-Peer (P2P)

P2P TCs measure the link delay to all neighbouring clocks (either slave clocks or other TCs). To measure link delays, PTPv2 introduces a new message pair called Pdelay_Req / Pdelay_Resp. When a Sync message traverses a P2P TC, the correction field is updated both with the residence time and with the previously measured link delay (see Figure 8). The P2P scheme does not overload the master clock with information coming from all the nodes in the network, as represented in Figure 9. Additionally, note that link delays are measured even on connections where standard traffic is blocked by, for example, the Spanning Tree Protocol.

4.5 PTPv1 and PTPv2 compatibility

Because the message format in PTPv2 differs from that of PTPv1, it is not possible for clocks of different versions to synchronize with each other. Nevertheless, there are several switches on the market able to use PTPv1 on one port and PTPv2 on another. Therefore, islands of different PTP versions can be used.

Figure 9.- E2E (left) and P2P (right) network schema

It is nevertheless expected that PTPv2 will be the dominant protocol in the future. For this reason, although FTI installations do not take any advantage from PTPv2, it is worth knowing how PTPv2 works.

5 PTP performance tests

5.1 Methodology and means

Differences between master and slave clocks can be measured by simultaneously monitoring the Pulse Per Second (PPS) signals coming out of them. When the clocks are synchronized, the difference between both signals tends to 0 seconds.

Figure 10.- Test rig schema

Table I shows the role distribution of the four units available to perform the tests. Each unit comes from a different manufacturer and all of them are FTI-focused.

           IRIG-B   PTPv1   PTPv2
Master     A        A       D
Slave      B        B/C     B/C

Table I.- Test means available

5.2 Reference measurements

As a reference, the synchronization performance without switches between master and slave has been measured. Figure 11 shows the typical accuracy achieved using IRIG-B (master clock A to slave clock B: µ = 40 µs, σ = 2 µs).

Figure 11.- IRIG-B performance

Comparing Figure 12 with Figure 11, it is possible to appreciate an improvement of one order of magnitude when using PTPv1 instead of IRIG-B (master A to slave B: µ = 300 ns, σ = 150 ns; master A to slave C: µ = 20 ns, σ = 20 ns). Figure 12 also shows the importance of the actual implementation of the protocol: units from different manufacturers provide quite different synchronization performance.

Figure 12.- PTPv1 performance

As clearly represented in Figure 13, PTPv2 performance is worse than PTPv1 on both tested hardware sets (master B to slave C: µ = 200 ns, σ = 25 ns; master B to slave B: µ = 300 µs, σ = 120 µs). Hardware B PTPv2 synchronization is even worse than IRIG-B.

Figure 13.- PTPv2 performance

5.3 PTPv1 performance using a regular switch

Nowadays, the Ethernet port cost of regular switches is 50-100 times lower than that of FTI PTP-compliant switches. Regular switches do not ensure the symmetrical transmission path delay required by PTP (see Section 4). However, as PTP performance exceeds FTI requirements, the synchronization accuracy achieved with regular switches could be sufficient for FTI purposes. Table II shows how PTPv1 Boundary Clocks do their job well: synchronization performance is not affected by the traffic level in the switch. On the other hand, when using a regular switch, performance is 10 times worse and, as expected, it is affected by the amount of traffic managed by the switch.
Nevertheless, with 33 Mbps of traffic (corresponding to a large instrumentation system), performance is similar to that obtained using IRIG-B. Therefore, using PTP with regular switches is possible in most FTI applications, with accuracy similar to or better than IRIG-B, depending on the amount of traffic.

Master clock A, slave clock C:

Network traffic            0 Mbps   6 Mbps   33 Mbps
Regular switch    Mean µ   0 ns     0 ns     0 ns
                  Std σ    500 ns   500 ns   5000 ns
Boundary clock    Mean µ   0 ns     0 ns     0 ns
                  Std σ    50 ns    50 ns    50 ns

Table II.- PTPv1 performance with different types of switch

5.4 PTPv2 performance using a regular switch

Unlike PTPv1, PTPv2 performance is severely affected when a regular switch is used to interconnect master and slave clocks. Figure 14 shows how, even with no additional traffic in the switch, synchronization accuracy is worse than with PTPv1: the mean deviation is 25 µs for hardware C and 120 µs for hardware B. Again, big differences in performance between manufacturers can be found. When a moderate level of Ethernet traffic is added to the system, performance declines. Slave clock B stability is so affected that it cannot be used for FTI purposes. Slave clock C, although working better, presents worse stability than that achieved using IRIG-B. Thus it is not possible to get usable synchronization using PTPv2 with regular switches: a transparent clock is mandatory in order to use PTPv2 in instrumentation architectures.

Figure 14.- PTPv2 performance using a regular switch

6 Conclusions

In order to get the best performance from PTP, it is mandatory to use expensive special switches, which ensure a symmetric transmission path between master and slave clocks. Using cheaper regular switches, with variable latencies in their queues, yields worse PTP performance. Nevertheless, it has been demonstrated that PTPv1 accuracy is as good as, or even better than, that provided by legacy IRIG-B synchronization. Performance is directly linked to the amount of traffic in the switch; however, it remains acceptable even at rates corresponding to large FTI installations. Therefore, PTPv1 can be used through regular switches for most FTI installations. In contrast, PTPv2 performance with regular switches is not usable for FTI installations: it approaches IRIG-B performance only when there is no additional traffic in the switches, and degrades severely under moderate traffic conditions. It is therefore mandatory to use transparent clocks when using PTPv2 as the synchronization protocol. The main aim of the tests was not to compare the synchronization performance of different hardware manufacturers; still, big differences in performance depending on the manufacturer have been found. The differences are especially large when testing PTPv2, which indicates a low maturity level of this technology. As the probable future standard is PTPv2, and it is not compatible with PTPv1, FTI manufacturers must evolve the technology in the medium term.

7 References

[1] Range Commanders Council, Telecommunications and Timing Group, "IRIG Serial Time Code Formats", 2004. Accessed: April 2015. http://www.wsmr.army.mil/RCCsite/Documents/200-04_IRIG%20Serial%20Time%20Code%20Formats/200-04_IRIG%20Serial%20Time%20Code%20Formats.pdf

[2] B. Dickerson, "IRIG-B Time Code Accuracy and Connection Requirements with comments on IED and system design considerations", (no date). Accessed: April 2015.
http://www.arbiter.com/files/product-attachments/irig_accuracy_and_connection_requirements.pdf

[3] Wikipedia, "Network Time Protocol", (no date). Accessed: April 2015. http://en.wikipedia.org/wiki/Network_Time_Protocol

[4] B. Dickerson, "Precision Timing in Power Industry, How and Why we use it", (no date). Accessed: April 2015. http://arbiter.com/news/technology.php?id=4

[5] H. Weibel, "The Second Edition of the High Precision Clock Synchronization Protocol", 2009. Accessed: April 2015. http://ines.zhaw.ch/fileadmin/user_upload/engineering/_Institute_und_Zentren/INES/Downloads/Technology_Update_IEEE1588_v2.pdf

[6] J.C. Eidson, "Measurement, Control and Communication using IEEE 1588", 2006. Ed. Springer. ISBN-10: 1-84628-250-0.

[7] D. Arnold, "One-Step or Two-Step?", 2013. News and Tutorials from Meinberg. Accessed: April 2015. http://blog.meinbergglobal.com/2013/10/28/one-step-two-step/

8 Acronyms

PTP: Precision Time Protocol
IRIG: Inter-range Instrumentation Group
FTI: Flight Test Instrumentation
NTP: Network Time Protocol
PI: Proportional-Integral control loop
BC: Boundary Clock
TC: Transparent Clock
PPS: Pulse Per Second


User Programmable FPGA I/O for Real-Time Systems - Combining User Friendliness, Performance, and Flexibility

Andreas Himmler (1), dSPACE GmbH, Paderborn, 33102, Germany
Jürgen Klahold (2), dSPACE GmbH, Paderborn, 33102, Germany

(1) Business Development Manager Aerospace, Product Management, dSPACE GmbH, Rathenaustraße 26, 33102 Paderborn, Germany.
(2) Product Manager Hardware-in-the-Loop Testing Systems, Product Management, dSPACE GmbH, Rathenaustraße 26, 33102 Paderborn, Germany.

Field Programmable Gate Array (FPGA) technology has already proven its benefits for a wide range of applications requiring I/O interfaces with highly parallel computation power and short latencies, in combination with very fast, high-resolution signal processing. These benefits are, for instance, highly appreciated when interfacing real-time systems to electric drives. Typical real-time systems are hardware-in-the-loop simulators and rapid-control-prototyping electronic control units for electric drives. This paper presents a workflow and an associated toolchain for a real-time technology that fulfills these requirements. The workflow gives users a framework to easily interface their own FPGA code on an FPGA I/O board with the physical I/O on the one hand and the real-time code running on the CPU of the real-time system (typically based on Matlab/Simulink) on the other. This framework is readily prepared but nonetheless flexible. The physical I/O of this I/O board is modular, making it possible to select from a range of off-the-shelf I/O modules complementing the FPGA base board or to build very specific, project-dependent I/O modules. An example is given of how this toolchain, the I/O boards and the real-time system are applied to real-world problems.

Nomenclature

ECU = electronic control unit
FPGA = field-programmable gate array
HIL = hardware-in-the-loop
I/O = input/output
LVDT = linear variable differential transformer
MEA = more electric aircraft
PSM = permanent magnet synchronous motor
PWM = pulse width modulation
RCP = rapid control prototyping
RTI = real-time interface
VHDL = very high speed integrated circuit hardware description language
XSG = Xilinx® System Generator

I. Introduction

Field Programmable Gate Array (FPGA) technology has already proven its benefits for a wide range of applications requiring I/O interfaces with highly parallel computation power and short latencies, in combination with very fast, high-resolution signal processing. These benefits are, for instance, highly appreciated when interfacing real-time systems to electric drives. Typical real-time systems are hardware-in-the-loop simulators and rapid-control-prototyping electronic control units for electric drives. In addition, FPGAs give users extreme flexibility when they need to implement specific sensor or actuator interfaces (e.g. synchro, resolver, or audio interfaces) or bus interfaces. Users are well aware of these benefits. Nevertheless, they regard the use of FPGA-based interfaces as complicated, because they may need to program an FPGA themselves and they need to interface the FPGA with the real-time CPU running their algorithms (e.g. simulation or analysis algorithms). Thus, there is a need to build FPGA-based I/O for real-time systems that adds an efficient, intuitive and simple-to-use workflow to the technology-inherent benefits of FPGAs. This paper presents a workflow and an associated toolchain for a real-time technology that fulfills these requirements.
The workflow gives users a framework to easily interface their own FPGA code on an FPGA I/O board with the physical I/O on the one hand and the real-time code running on the CPU of the real-time system (typically based on Matlab/Simulink) on the other. This framework is readily prepared but nonetheless flexible. The physical I/O of this I/O board is modular, making it possible to select from a range of off-the-shelf I/O modules complementing the FPGA base board or to build very specific, project-dependent I/O modules. Examples will be given of how this toolchain, the I/O boards and the real-time system are applied to real-world problems.

The first two sections give an overview of real-time systems that require the application of FPGAs for closed-loop operation with very short cycle times: the first kind of real-time system is the Rapid Control Prototyping (RCP) system, and the second is the Hardware-in-the-Loop (HIL) simulation system. The third section describes in more detail why there is a need to apply FPGAs in RCP and HIL systems. Section four describes a user-friendly workflow to use FPGAs in a real-time system most efficiently, and section five contains an application example of how FPGAs are used in a HIL system to test electric drives.

Figure 1. Rapid Control Prototyping systems: compact, ruggedized system, modular systems, and a modular, ruggedized signal conditioning system.
Figure 2. General RCP development process.

II. Rapid Control Prototyping Systems for Model-Based Development

Many industries are under pressure to reduce their development times while producing unique and innovative products. These two factors are indispensable to success in a globalized market, especially for high-tech industries such as automotive, aerospace and communication, where electronic controls are a vital part of each new product. Model-based control design is a time-saving, cost-effective approach, because control engineers work with just a single model of a function or a complete system in an integrated software environment. This model-based development process results in an optimized and fully tested system, with no risk that individual components do not fit together optimally. To model controller strategies and the internal behavior of software components, tools such as MATLAB®/Simulink®/Stateflow® from MathWorks and TargetLink® from dSPACE can be used.

If a new ECU or a new set of control functions has to be developed from scratch, quick trials have to be run at an early stage to verify the correctness of the control strategy. Tests in the real environment (e.g. vehicle, plane) or on a test bench therefore have to be carried out even before the new ECU hardware becomes available. Producing an application-specific prototype ECU for this purpose, e.g. by modifying a production ECU, would be expensive, time-consuming and inflexible. Instead, developers can use a powerful off-the-shelf rapid control prototyping (RCP) system which acts as an experimental ECU, but which has many advantages compared to other solutions. User requirements on RCP systems are very diverse. Some applications require compact, ruggedized RCP systems for in-vehicle use, while other RCP systems are to be used in the lab. Some users require systems that offer optimum scalability and flexibility, while others require compact, all-in-one systems with a common set of well-known I/O interfaces. Examples of such systems are shown in Figure 1 and Figure 3.
Such best-in-class RCP systems for model-based development fulfill two requirements: (1) they have high computation power combined with very low I/O latencies in order to provide great real-time performance, and (2) they have seamless Simulink integration to allow faster design iterations and to reduce the overall development time.

Figure 3. Rapid Control Prototyping system MicroLabBox.

III. Real-Time Systems for Hardware-in-the-Loop Simulation

Hardware-in-the-loop (HIL) simulation is an integral and reliable part of the development process. It is used for testing ECU functions, for system integration, and for testing ECU communication. The environment of the ECUs to be tested is simulated in real time (Figure 4). The environment can consist of interacting system components such as sensors and actuators, other subsystems or complete systems, and the aircraft or vehicle environment.

Figure 4. Hardware-in-the-loop simulation.

The main advantages of HIL tests are reproducibility, systematic and automated testing also outside of safe system states, and the traceability of problems observed in the field. This makes it possible to conduct tests efficiently (in time and cost) and as early as possible in the development process. The trend to test with virtual (i.e., simulated) ECUs that are later replaced with real ECUs highlights the importance of early testing (Ref. 2). A typical HIL system comprises the simulation hardware, such as:

- A processor unit for computing the simulation models
- Battery simulation as the power supply for the simulation system
- I/O interfaces
- Other auxiliary components such as load boards or failure simulation

Connected to it is the unit under test, which usually is one or more electronic control units (ECUs) containing new functions or ECU software to be tested. The software for configuring and automating the HIL test runs on a PC, as does the software for parameterizing the simulation model and visualizing the simulation run. Test data management software can also be used.

In the following, the HIL technology SCALEXIO [2] is used to discuss topics related to HIL simulation. SCALEXIO uses the I/O network IOCNET, which provides high bandwidths and low latencies and supports the required synchronization of models on the real-time processor and of FPGAs applied on I/O boards. The configuration of a SCALEXIO HIL system is done using the dedicated tool ConfigurationDesk® (ref.). The configuration process is roughly divided into three tasks: describing the externally connected devices (control units, real loads, etc.), selecting the I/O functions for each signal, and linking the I/O functions to the plant model.

IV. FPGAs for Real-Time Systems

Rapid Control Prototyping (RCP) and Hardware-in-the-Loop (HIL) technologies require very fast computations, in particular when electric drives are controlled or simulated. In the prototyping phase of a new electronic control unit (ECU), it might be necessary to use new sensors, and therefore to implement new protocols, or new control strategies which require more precise control of the power stages. The implementation of new protocols or controls with a direct effect on electric components is not feasible on a processor and requires a direct hardware implementation, which is only possible on ASICs or FPGAs. As ASICs are also not feasible for prototyping new functions, FPGAs are used.
The simulation of electric drives requires the simultaneous computation of complex simulation models and highly precise measurement of the electronic control unit (ECU) signals. Generally, time-critical I/O computations are described by an FPGA model. Even parts of the plant model are frequently executed on the FPGA to meet the needs of a modern ECU. dSPACE's SCALEXIO provides a convenient solution for both these cases and also for mixed scenarios.

Figure 5. FPGA Interface Library.
Figure 6. Signal chain on the HIL system SCALEXIO.

If mean-value models are used on the processor side, the output signal is often updated only once per ECU control step (typically 50 µs). FPGA-based model computation offers decisive advantages for the highest requirements on dynamics and accuracy. FPGAs reach very high sampling rates, so output signals are calculated and updated considerably more often than once per ECU sampling cycle. The result is an appreciably higher quality of simulation. For example, high-frequency simulation makes it possible to simulate the inductance current ripple caused by pulse width modulation (PWM) control, to improve the precision with which higher frequencies are simulated, and to ensure high control-loop stability. The measurable latency between the hardware input and hardware output is typically reduced from 50 µs to about 1 µs in comparison to processor-based models. The simulated current values are output every 100 ns. Nevertheless, it should be mentioned that the toolchain for FPGAs consumes a lot of time and normally requires specialized hardware developers. Even if usability is improved to a level at which software developers can also design functions for FPGAs, extra effort is still necessary. Therefore, the intention is to keep the function parts that are executed on an FPGA as stable as possible, avoiding running through the time-consuming build process too often.

V. User-Friendly Workflow for Model-Based Design

Commonly, FPGAs are used only by hardware developers on I/O boards with a fixed set of implemented functions, so special toolchains and special languages (e.g. VHDL) can be used to program the FPGA. With new requirements on turnaround times (below one µs) for models or new interfaces, the inherent flexibility of FPGAs has to become usable by function developers to satisfy the shortened development times. Using VHDL in this case is inconvenient, as the developer would have to learn a new language and development chain, completely different from his ordinary work. Therefore, a method to program an FPGA is required which matches the familiar workflow of a function developer. As known from modelling for processors, the most convenient way to configure an FPGA is a graphical method. The established tooling for processors is Simulink®, so it suggests itself to use Simulink for programming the FPGA as well. The Xilinx® System Generator (XSG) is a Simulink blockset for configuring Xilinx® FPGAs. It contains simple logic elements as well as complex blocks such as Fourier transforms and FIR filters. With this blockset, or additional libraries based on the XSG, it is possible to implement the desired function on the FPGA. However, the model still has to be connected to the environment of the FPGA: on the one side the I/O connected to the FPGA in order to measure or generate signals, and on the other side the real-time processor in order to exchange data.
Figure 7. XSG Utils Library.
Figure 8. FPGA-based simulation, illustrating the data transfer from the processor model (top window) to the FPGA model (bottom window).

To evaluate the desired behavior, an offline simulation within Simulink is very useful, as the build process for an FPGA application takes a long time. It is an advantage if the interface blocks directly support offline simulation, as no variant of the model is required and some effects like quantization or value ranges can be taken into account directly. A concurrent simulation of the model for the real-time processor stimulates the model for the FPGA with proper values and also verifies the interaction of both model parts. In addition, behavior verification is much easier within Simulink, especially together with the other model parts that will finally be used, than with the method common among hardware developers of writing special testbenches in VHDL.

After the offline verification of the behavior, a build of the application is required. As a function developer is unfamiliar with the special tooling for FPGAs, a push-button solution is preferred. Therefore, it would be optimal if the model for the FPGA could be transferred directly into an I/O function for the processor model. This allows the function developer, as well as his colleagues, to use the FPGA application within their common software development environment, for further projects as well as for several FPGA boards.

dSPACE provides both the hardware (e.g. MicroLabBox or the DS2655 FPGA Base Board) and the software (RTI FPGA Programming Blockset) to connect the XSG models to the FPGA's interfaces. The FPGA Programming Blockset provides blocks for implementing the interface between the FPGA mounted on a dSPACE board and its I/O, and the interface between the dSPACE FPGA board and the real-time PC of the system. On the I/O side, the blockset allows a simple configuration of the basic features of the I/O, e.g. the electrical configuration of a digital output driver (push/pull, high-side voltage 3.3 V or 5 V, etc.). Further functions, like the measurement of the duty cycle of a PWM signal, can easily be implemented by connecting a PWM measurement block from the utils library to the I/O block (a small behavioral sketch of such a measurement follows below). In the same manner, it is also possible to add more complex function blocks like resolver simulation or the control of power stages. This allows comfortable handling of I/O functionalities, as known from RTI for standard processor modelling, with the new possibility to adapt the I/O function if necessary, enabling the implementation of new interfaces, for example. The I/O functions can be extended with controllers or models (e.g. based on the XSG Electric Components Library) to close loops with turnaround times far below 1 µs. Slower parts can be implemented on the processor.

Therefore, a convenient way to exchange data between the FPGA and the processor is also provided. The user can choose between buffer communication, to transfer a burst of data, and register-based communication, to transfer single values, also from different levels of the FPGA-related model. Here it is helpful to group the registers into logical groups so that consistent sets of data are transferred. The framework directly supports this.

Figure 9. FPGA-based simulation at signal level.
Figure 10. Simulation of the current ripples of a permanent magnet synchronous motor.
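As a hedged illustration of the kind of behavioral model one might check offline before committing to an FPGA build, the following Python snippet estimates the duty cycle of an oversampled PWM input by counting high samples. It is a stand-in for the blockset's PWM measurement function, not dSPACE code, and the clock and signal parameters are invented.

```python
# Behavioral stand-in for a PWM duty-cycle measurement, of the kind one
# would verify offline before the FPGA build. All parameters are invented.

def duty_cycle(samples):
    """samples: 0/1 values captured at a fixed FPGA clock rate."""
    return sum(samples) / len(samples)

fpga_clock_hz = 100e6                  # assumed 100 MHz sampling clock
pwm_hz, duty = 20e3, 0.3               # 20 kHz PWM at 30 % duty cycle
ticks = int(fpga_clock_hz / pwm_hz)    # samples per PWM period (5000)
high = int(duty * ticks)
one_period = [1] * high + [0] * (ticks - high)
print(duty_cycle(one_period))          # -> 0.3
```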
Considering the complete workflow, a comfortable toolchain is provided. It is fully Simulink-based and hides the FPGA-specific tools. Function developers can stay in their common environment, model the desired behavior in their familiar way and verify it, including quantization and fixed-point effects. The result is an FPGA application which can be used directly in the known model-based design flow for processors.

VI. Application Example

After the functions of the electronic control units (ECUs) associated with an electric motor have been developed and implemented on the production ECU, they have to be tested thoroughly. With HIL simulation, it is easy to cover all the different motor varieties and their ECUs. A combination of fast model computation and low I/O latencies is indispensable for simulating highly dynamic controlled systems in electric drives engineering, and this is also a typical application area for SCALEXIO. With the connection to the XSG Electric Components Library, the following challenges can be handled. The realistic representation of current behavior required for developing analog current controllers runs with a sampling rate considerably higher than once per PWM period. When simulating electrical circuits with frequencies higher than 1 kHz, processor-based simulations reach their limits; using FPGA technology extends the range several times over. The enormous FPGA sampling rate makes PWM synchronization unnecessary, so systems with a variable PWM switching frequency can now also be simulated realistically. Highly dynamic applications such as DC/DC converters require higher PWM frequencies. These frequencies are higher than 20 kHz, and the only way to represent the current and voltage realistically is FPGA-based simulation. When an electric motor is simulated at power level, voltage and current values must be represented as realistically as possible. This is necessary if these reference values are to be used as input to the electronic load. Here too, fast computation on the FPGA is absolutely essential. Figure 10 shows an example of the simulation of current ripples: the current behavior in the stator of a permanent magnet synchronous motor at a speed of 6250 rpm; the PWM frequency is 32 kHz.

References

1 Himmler, A., "Modular, Scalable Hardware-in-the-Loop Systems," ATZelektronik worldwide, Vol. 5, No. 2, 2012, pp. 36-39.
2 Himmler, A., "Hardware-in-the-Loop Technology Enabling Flexible Testing Processes," 51st AIAA Aerospace Sciences Meeting including the New Horizons Forum and Aerospace Exposition, DOI 10.2514/6.2013-816, eISBN 978-1-62410-181-6, Grapevine (Dallas/Ft. Worth Region), Texas, 2013.
3 Himmler, A., Allen, J., and Moudgal, V., "Flexible Avionics Testing - From Virtual ECU Testing to HIL Testing," SAE 2013 AeroTech Congress & Exhibition, Montreal, Canada, 2013, SAE Technical Paper 2013-01-2242, doi:10.4271/2013-01-2242, http://papers.sae.org/2013-01-2242/ [cited December 12, 2013].
4 Himmler, A., "Openness Requirements for Next Generation Hardware-in-the-Loop Testing Systems," AIAA Modeling and Simulation Technologies Conference, doi:10.2514/6.2014-0636, National Harbor, Maryland, 2014.
5 Schütte, H., Wältermann, P., "Hardware-in-the-Loop Testing of Vehicle Dynamics Controllers - A Technical Survey," SAE Technical Paper [online database], Paper 2005-01-1660, URL: http://papers.sae.org/2005-01-1660 [cited December 13, 2012].


N° 9 – Guaranteed end-to-end latency through Ethernet - Øyvind Holmeide and Markus Schmitz - OnTime Networks - Norway, United States. Latency-sensitive data in a Flight Test Instrumentation (FTI) system represents a challenging network requirement. Data meant for the telemetry link and sent through an on-board Ethernet network may be sensitive to high network latency; the worst-case latency allowed through the on-board Ethernet network for such data might be as low as a few hundred microseconds. This challenge is addressed by utilizing the Quality of Service (QoS) capabilities of the Ethernet FTI switches. This paper describes how to use the Ethernet layer 1, layer 2 or layer 3 QoS principles of a modern Ethernet FTI network.
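A back-of-envelope calculation (not from the paper) shows why a few hundred microseconds is a plausible bound: with strict-priority QoS, a high-priority frame can still wait, at each store-and-forward hop, for one maximum-size frame that is already being transmitted.

```python
# Worst-case per-hop latency with strict-priority queuing: own serialization
# plus one maximum-size frame (1522 bytes) already on the wire. Switch
# fabric delay is ignored here for simplicity.

def hop_latency_us(frame_bytes, link_mbps, blocking_bytes=1522):
    serialize = frame_bytes * 8 / link_mbps   # microseconds at Mbit/s
    blocking = blocking_bytes * 8 / link_mbps
    return serialize + blocking

# A hypothetical 512-byte FTI packet crossing 3 switches at 100 Mbit/s:
print(3 * hop_latency_us(512, 100))           # ~488 us worst case
```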


ETTC 2015 – European Test & Telemetry Conference

Lessons for Onboard Data Storage from Electronic Data Processing Environments and Airborne Video Systems

Malcolm Weir, Ampex Data Systems Corporation, Redwood City, CA, USA

Abstract: When compared to the changes seen in commercial data processing and digital video over the past three decades or so, onboard data storage techniques have evolved surprisingly little. This paper explores some of the implications for data storage devices used for test instrumentation acquisition and the methods employed to format and store that data.

Keywords: Metadata, File Formats, Database

1. Introduction

In the early 1980s, the commercial Electronic Data Processing (EDP) marketplace was starting a gradual transition from off-line batch processing to what was to become known as On-Line Transaction Processing (OLTP). While the operational environment in which the Test and Evaluation (T&E) community has to function is very different from that of EDP shops, many of the goals underlying the data collection are similar, or in some way related, to those of the more mundane commercial industry. There are also, of course, some very significant differences. More recently, the techniques used to manage airborne video acquisition have also evolved. Again, there are similarities between video acquisition and T&E applications.

2. Current Onboard Data Storage

2.1. Legacy Use Cases

In the context of this paper, onboard data storage primarily concerns data acquisition operations. In the abstract, acquisition can be characterized as being dominated by write operations, where data is aggregated and stored. Historically, this was the totality of the exercise. Real-time onboard processing (or the absence of it), while not obviously directly associated with onboard storage, is generally a significant part of (or absence from) any data acquisition system. Likewise, the data links away from the test article are obviously part of the system, but the data flow is simple and unidirectional. As Figure 1 shows, the data store was originally tape based, and in terms of logical structure, in many cases this has evolved little beyond changes in the physical media. Increasingly, though, there are demands for retrieval of the acquired data before the acquisition is "complete". This is coupled with far more complex, bidirectional data flows, and with a desire for multiple non-overlapping data stores (e.g., crash-survivable storage), which typically hold a subset of the data. This all leads to a data flow resembling the one shown in Figure 2.

2.2. Data Storage Formatting

As indicated earlier, even though tape-based data storage systems have largely been replaced by ones employing solid-state devices, most of which emulate magnetic disk drives, the data is still written in a form that is completely compatible with sequential-access devices like tape drives. In fact, the dominant standard in the field, the "IRIG 106 Chapter 10 Solid State On-Board Recorder Standard", also known as the "IRIG 106 Chapter 10 Digital Recording Standard" (the exact title depending on the version), not only mandates that data be stored in a sequential-access "flat file", but also that the file must be logically contiguous for each recording "session". While this was a reasonable restriction when the data set for a given recording session ranged in size from a few hundred megabytes up to a few gigabytes, it becomes dramatically unworkable for datasets sized in terabytes.
More recent standards (both published and proprietary) that cover onboard acquisition, such as those from the "integrated Network Enhanced Telemetry" (iNET) effort, avoid specifying how data items should be stored, but define access methods. The methods are typically fairly elementary, and thus could be implemented by searching and reading a flat, Chapter 10-style file and then filtering out the unwanted data; this sort of approach is in fact the preferred technique for the iNET goal of filling in "drop outs" on an otherwise unreliable datalink.

2.3. File Format
Taking the Chapter 10 file format to be representative of a typical "flat file" recording, the file structure can be generalized as:
• A set of variable-length records, termed "packets" in Chapter 10.
• The records are stored ordered by a counter with a tick interval of 100 nanoseconds.
• While the records appear similar to a time series, in practice they are not truly ordered by chronological time (because the payload in a given record may have been delayed at various points prior to writing to the file, while the payload in adjacent records may not have been delayed). However, for a given source, the records will constitute a time series.
• The records have "weak typing": while the definition of a given record is defined by the standard, there is no mechanism to verify that the type of data mandated is actually present. For example, compressed video may be stored as some Chapter 10 flags followed by a series of MPEG-2 Transport Packets; as far as the file format is concerned, the Transport Packets can be invalid – for example, the mandatory sync byte may be incorrect – but that is permissible for the Chapter 10 file precisely because there may be value in collecting erroneous data.
• Related to the concept of "weak typing" is the observation that generic container formats ("Message Data Packets") can be used to hold arbitrary data, and such containers are, by definition, opaque from the perspective of the controlling standard: to interpret the Message Data container, you need additional information, and there exists no good method for providing that information.
• There are three special record locations, two of which are mandatory and one optional. The first two records in the file must be a "setup record" and a "time packet", and if "Recording Event indexing" is used, the final record must be the "Root Index".
• Metadata may be placed in the "setup record", but the absence of metadata does not invalidate the file. In general, the metadata is narrative ("strain gauge on left wing") rather than descriptive from a data modelling perspective ("integer in the range -127 to +128"). An excellent example of this distinction lies in recording GPS data: many GPS sources provide information in text strings (known as "sentences" by the applicable standard), but the data may more usefully be stored as numbers; so the metadata applicable to that record may indicate that certain records contain GPS time, but not how that time is stored¹.
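To make the generalized flat-file structure above concrete, here is a minimal reader sketch in Python. The header layout below (sync word, channel ID, payload length, 100 ns tick counter) is a simplified illustration inspired by Chapter 10, not the actual Chapter 10 packet header, although 0xEB25 is the sync value the standard uses.

```python
import struct
from typing import Iterator, NamedTuple

class Packet(NamedTuple):
    channel_id: int   # identifies the data source
    time_100ns: int   # tick counter, 100 ns per tick
    payload: bytes    # opaque ("weakly typed") record body

# Hypothetical 16-byte header: sync (2), channel id (2), payload length (4), ticks (8)
HEADER = struct.Struct("<HHIQ")
SYNC = 0xEB25  # Chapter 10 sync value; the rest of this layout is a simplification

def read_packets(path: str) -> Iterator[Packet]:
    """Sequentially scan a flat recording file, yielding packets in file order."""
    with open(path, "rb") as f:
        while True:
            hdr = f.read(HEADER.size)
            if len(hdr) < HEADER.size:
                return  # end of recording
            sync, chan, length, ticks = HEADER.unpack(hdr)
            if sync != SYNC:
                raise ValueError("lost packet synchronization")
            yield Packet(chan, ticks, f.read(length))

# Filtering one channel means scanning (and discarding) everything else --
# exactly the cost that motivates richer storage structures:
# samples = [p for p in read_packets("session001.bin") if p.channel_id == 42]
```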
2.4. Metadata
As indicated above, current industry-standard practice tends to focus the term "metadata" on the sources of the data; Chapter 10 specifies that the IRIG 106 Chapter 9 standard ("TMATS") be used to describe the data sources. While this is essential for the interpretation of the information carried by that data, it overlooks the lowest-level metadata that describes how the data is stored on board, and indeed how it is handled once it is transferred off the aircraft. Obviously, a schema can be developed for just about any organized dataset, but significantly the schema for typical legacy onboard recording files is only loosely defined.

2.5. Emerging Use Cases
As onboard systems and the related storage evolve, two factors stress the legacy way of saving data: the first is the sheer volume of data being managed, and the second is the sort of environment illustrated in Figure 2, where data retrieval requests are proliferating. It is clearly desirable for the data store to be able to search previously saved records efficiently, so that real-time data mining can provide exception-based answers, because the volume prohibits exhaustive scanning of the entire dataset. It is also clear that the handling of metadata, in both the narrative and data modelling forms discussed earlier, needs to be incorporated into the design of large systems. There is also yet another layer of metadata that has become (at least) desirable and, in some cases, mandatory: the configuration information. The goal of this metadata is to preserve sufficient information so as to be able to configure the data acquisition system into the same state. Significantly, although these issues have been described for the onboard storage system, they are as significant, if not more so, for the processing and archival segment "on the ground". The concerns also apply to the process of moving data from the onboard system to the processing system: it is frustratingly common to require the entire data set to be transferred off the vehicle before analysis can begin; in an ideal world, the "interesting portion" of the data should be transferred first, with the rest following later, or not at all. Fortunately, the task of defining which portion is "interesting" lies well beyond the scope of this paper.

3. Commercial Electronic Data Processing
3.1. Background
One of the fundamental techniques of off-line data processing is known as "Master In, Master Out", or "MIMO". In a MIMO process, the file consisting of "yesterday's" master dataset is sequentially read along with a sorted list of transactions; if the unique identifier (e.g., account number) matches, the master record is updated (e.g., the account balance is adjusted by the amount of the transaction). In any event, the (potentially modified) record is then written out to create a new master file.

Figure 3: MIMO processing (yesterday's master plus sorted transactions pass through the update process to produce today's master)

The virtues of this sort of approach are obvious: neither the old master file nor the transaction file is modified, so it is trivial to preserve coherent backups. And, since all the files involved are read or written sequentially, the data can conveniently be stored on magnetic tape, which was by far the most cost-effective medium of the day. The drawbacks are also clear: regardless of the volume of transactions, the entire master file must be read and written, and the transactions must be aggregated and ordered (sorted) appropriately for the operation (e.g., sorted by account number).
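As a concrete illustration of the MIMO pass just described, here is a minimal sketch in Python. The record layout (one CSV row of account number and balance, sorted by account number) is hypothetical, and transactions for accounts absent from the master are simply ignored for brevity.

```python
import csv

def mimo_update(master_in: str, trans_in: str, master_out: str) -> None:
    """One sequential MIMO pass: yesterday's master plus sorted transactions
    produce today's master. Neither input file is modified."""
    with open(master_in, newline="") as m, \
         open(trans_in, newline="") as t, \
         open(master_out, "w", newline="") as out:
        master = csv.reader(m)    # rows: (account, balance), sorted by account
        trans = csv.reader(t)     # rows: (account, amount), sorted by account
        writer = csv.writer(out)
        tx = next(trans, None)
        for account, balance in master:
            balance = float(balance)
            # apply every transaction for this account (inputs are pre-sorted)
            while tx is not None and tx[0] == account:
                balance += float(tx[1])
                tx = next(trans, None)
            # the (potentially modified) record is written to the new master
            writer.writerow([account, f"{balance:.2f}"])
```

Every file here is read or written strictly sequentially, which is precisely what made the technique tape-friendly.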
Current EDP practice would probably involve a relational database management system (RDBMS), and resemble something like this: take a snapshot of the database, and then apply transactions as they come in. At the end of the day, take another snapshot. The obvious advantages of the real-time update perhaps mask a slightly more subtle advantage: since each incoming transaction is processed without sorting or aggregation, it is straightforward to implement multiple dataset updates, so that maintaining searchable records of transactions becomes practical. And, of course, if you can search transactions, you can address the archetypal "Big Data" questions, including the legendary concept of mapping rainfall locations by tracking the sale of umbrellas. Against this is the fact that trying to process each transaction "on the fly" introduces potential issues of "bursty" traffic skewing the performance requirements.

3.2. Observations about Complexity
If one equates the workload of receiving a transaction (e.g., over a network connection) with the workload of storing a transaction, then the I/O complexity of the update process as a whole can be described, for the MIMO case, as O(2n + m), where n is the number of records in the master file and m is the number of transactions being added. For the modern OLTP approach, assuming a master file significantly larger than the transaction file, the complexity is substantially lower: O(m + m log n). However, while the total complexity is substantially greater in the MIMO case than in the OLTP one, a critical observation is that the "momentary" complexity with MIMO is much, much lower: there is no updating of index trees, no transaction locking, nor any of the other sophistication that one expects with a relational database. Stated another way, while the I/O complexity of the OLTP case is lower than that of the MIMO one, other measures of complexity favor the latter. Perhaps more important, though, is the recognition that if there is no pre-existing master file, then the MIMO case degenerates into O(2m), while the OLTP version becomes O(m + m log m). In this case, therefore, it is clear that the modern OLTP approach will perform several times as much I/O, specifically of the order of log m times that of a simple linear file approach like that of the MIMO operation. But since OLTP systems exist (to be quite accurate: they dominate the universe of installed systems), it is clear that the additional I/O complexity is not inherently a bar to using these types of solutions.

4. Airborne Video
4.1. Background
Beginning with the invention, in 1956², of the practical video tape recorder, right up to the widespread introduction of digital TV, almost all video, whether airborne or commercial, was recorded in the same manner: as a single "file" on a linear storage medium (i.e., a tape). With the adoption of digital video, largely driven by digital broadcasting, nothing much changed; the video, now digitized, was still stored as a single file on some media (initially tape, but non-linear media – e.g., "disks" – were rapidly adopted). This form of digital video has minimal manageable metadata, usually limited to fairly generic date-and-location information, possibly with a few "event" tags stored as flashes or audio tones.
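To give these bounds some rough texture, the following back-of-the-envelope sketch counts I/O operations for illustrative (not measured) volumes of n master records and m transactions; constants and real-world effects are ignored, as with any O() estimate.

```python
import math

def mimo_io(n: int, m: int) -> int:
    # read old master (n), read transactions (m), write new master (n)
    return 2 * n + m

def oltp_io(n: int, m: int) -> float:
    # receive each transaction (m) plus an indexed update costing ~log n each
    return m + m * math.log2(max(n, 2))

n, m = 10_000_000, 50_000   # illustrative master and transaction volumes
print(f"MIMO: {mimo_io(n, m):,} operations")     # dominated by rewriting the master
print(f"OLTP: {oltp_io(n, m):,.0f} operations")  # dominated by per-transaction index work
# With no pre-existing master (n == m), OLTP performs ~log2(m) times more I/O than MIMO.
```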
Significant additional metadata may exist within the file, as video overlays, which are, of course, not amenable to machine searching (and therefore might be more properly considered "data" rather than "metadata"), and which by definition obscure parts of the video frame. However, following the creation within the US Department of Defense of NIMA (now NGA) in 1996, a concerted effort was made to standardize video formats and introduce a comprehensive digital metadata "ecosystem". These efforts are managed by the Motion Imagery Standards Board (MISB). Largely as a result of this effort, the dominant format for airborne video has been steadily evolving towards a common structure:
• A set of fixed-length records / "packets" containing all information in the file (or stream).
• The records are a time series, subject to the limitations of the data sources: a video frame is a snapshot, while an audio channel is a continuous source; the snapshotted data trickles out over the interval between frames, which makes it hard to draw conclusions about the "correct" ordering of packets.
• Structural metadata for things like drop-out detection.
• A defined 27 MHz clock (actually, a counter) that synchronizes the various types of data (video, audio, etc.).
• Provision for multiple streams, both semi-independent and related, including video, audio and user-defined metadata. (The underlying use case for this capability originated with broadcast video, where multiple camera angles and microphones might be broadcast over a single logical link.)

4.2. "User" Metadata
One of the driving imperatives from the MISB was to eliminate "burned in" metadata and replace it with machine-readable structures. By design, a system that permits arbitrary labels, or "keys", was selected, based on the "KLV" (Key-Length-Value) scheme adopted by digital broadcasters [1]. To avoid multiple keys being selected for the same logical entity, dictionaries are maintained (in particular, one by SMPTE and one by MISB); there is no requirement at the file level that any given key be used, but for interoperability the appropriate dictionaries should be employed. The presence of this comprehensive metadata has resulted in a sea change in how the acquired video is archived. While previously it might have been catalogued by date and mission, now the whole metadata set can be indexed. So with a product like General Dynamics Mediaware's "D-VEX" [2] system, an analyst can call up all video, from whatever source, that (for example) includes a particular latitude and longitude, which was shot in the past 6 months, from an altitude of less than 20,000 feet. Critically, this capability is achieved by reading the file captured on board, and then building and maintaining a parallel database that provides positioning information (such as file name and offset) so that the specific location of the metadata can be found in the original file(s). There is no requirement to modify the source file, so the integrity of the original recording is preserved.
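A minimal sketch of both ideas, the KLV triplet structure and the non-destructive "parallel index", follows. The fixed 16-byte keys and 4-byte length field are simplifications (real KLV uses registered dictionary keys and BER-encoded lengths), and the index layout is merely inspired by the D-VEX description above, not taken from that product.

```python
import struct
from collections import defaultdict

def parse_klv(blob: bytes):
    """Yield (offset, key, value) triplets from a KLV-encoded buffer.
    Simplified: 16-byte key + 4-byte big-endian length (real KLV uses BER lengths)."""
    pos = 0
    while pos + 20 <= len(blob):
        key = blob[pos:pos + 16]
        (length,) = struct.unpack_from(">I", blob, pos + 16)
        yield pos, key, blob[pos + 20:pos + 20 + length]
        pos += 20 + length

def build_index(path: str) -> dict:
    """Build a D-VEX-style parallel index: key -> [(file, offset), ...].
    The source recording is only read, never modified."""
    index = defaultdict(list)
    with open(path, "rb") as f:
        for offset, key, _value in parse_klv(f.read()):
            index[key].append((path, offset))
    return index
```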
5. Evolving Onboard Storage Techniques
5.1. Lessons from Onboard Video
It is clear that it would be relatively straightforward, in the abstract, to combine the digital video approach used by the "D-VEX" software package, as described above, with files created in the T&E realm. However, while this obviously has significant value for the processing of T&E datasets, it falls rather short in a number of areas. First, and possibly most importantly, it should be noticed that in the digital video case, what is being indexed is the metadata rather than the video itself (note that it is not uncommon to also index a thumbnail consisting of a reduced-size still image from the video, but this can be thought of as just another type of metadata). The metadata represents a significantly smaller volume than the related data (the video). But in the abstract T&E case, it is possible that one would want to index a significant proportion of each record, so it may well be more beneficial to simply import the onboard recording into a full-fledged RDBMS. It is therefore unlikely that comprehensive T&E recordings will achieve the same sort of dramatic improvements in data management that have been seen with video coupled with this sort of metadata.

However, it is certainly possible, and eminently practical, to use the digital video metadata as a mechanism for embedding non-traditional forms of metadata in a video stream. For example, a flight test monitoring ice build-up may employ a high-definition video camera trained on the monitored aerodynamic surface. Additional metadata can be added to the video being produced as custom "KLV" fields, effectively placing flight test measurands into the video stream, instead of placing the video into a flight test file format (such as Chapter 10). Which approach to take depends entirely on the circumstances of the test and the infrastructure that exists to support the test program. Certainly, in many instances, obtaining long-distance transport of a video stream might be easier to manage simply because of the presence of a mature broadcast market that routinely streams video around the world via cable and satellite.

But perhaps the most significant lesson from onboard airborne video collection is that there is significant value in designing a system where arbitrary metadata can be added to an otherwise fairly rigidly structured data stream. Such metadata could include diagnostic, statistical and "system health" data, but would generally be limited to metadata that more or less follows the data flow from the sources to the onboard recorder; this mechanism would be poorly suited to the configuration/setup data mentioned earlier. To avoid anarchy, discipline must be observed in the use of such metadata (e.g., the metadata must not be used to store data that has a predefined record format; and if a particular metadata key already exists, it must be used in preference to crafting a new, application-specific key), but all of these issues are manageable: for new video metadata keys, the MISB claims to be able to approve a new key in a timeframe of the order of one week [3], which is the sort of agility this capability would need in order to ensure adherence to "the rules".

5.2. Lessons from EDP
At the end of the day, the primary objection to using an RDBMS is the performance penalty resulting from the increased complexity of database update operations compared to the sort of simple sequential operations used by the MIMO approach. Two strategies exist to address this objection:
a) Factor into the design of the onboard storage system sufficient excess capacity that the performance penalty will be absorbed unnoticed.
b) Alternatively, use a "lighter weight" database system, possibly one omitting capabilities such as the full relational data model.
Considering the first point, it can be noted that recorder design has evolved over the past few generations, so that architectures that were previously built precisely to deliver the required performance using "hard real-time" operating systems are now being replaced with systems that use general-purpose hardware with carefully designed software. So it is not unreasonable to predict that onboard storage systems could be built to handle typical T&E applications. But it is axiomatic that instrumentation systems are constrained by size, weight and power requirements, and therefore, for a given set of performance requirements, the smallest, lightest and least power-hungry design will generally be preferred over a more versatile one. And even if the performance issue is rendered moot, there is an additional issue with using a full-fledged RDBMS for the onboard data store: once a test is completed, it is typical for the acquired data to be immediately transferred to a ground processing segment, where it is merged with data from other tests. For this operation, using an RDBMS typically means that the data would have to be exported from the onboard database and re-imported into the consolidated one. This is of course a straightforward task, but it is almost always preferable to be able to use the original recording files, in much the same way as the "D-VEX" application can use the original video recording files. The second option is, for a number of reasons, rather more practical. One of the alternatives to a full RDBMS that has been considered is a hierarchical data format, specifically the HDF5 [4] package and the related NetCDF-4 [5] variant.

5.3. Hierarchical Data Formats
HDF5, the related NetCDF-4 and their predecessors were designed by the scientific computing community: HDF5 at the National Center for Supercomputing Applications (NCSA) and several US Department of Energy laboratories, while NetCDF-4 originated with the Unidata organization at the University Corporation for Atmospheric Research (UCAR). While some of the design goals have less direct applicability to the T&E community than others (e.g., the abstraction of data types to allow portability across different computer architectures), the existence of a standardized format designed for large I/O volumes has significant value. This design background ensures that the format is well suited to acquiring time-series data, such as from the sorts of experiments that major and national laboratories conduct: it is optimized for high-speed raw data acquisition. As government-funded developments, not only are the specifications available under a variety of "open source" licenses, but a range of software libraries and tools to exploit the file formats has also been published. For the T&E community, a possible approach might be to use a single HDF5 "file", where an HDF5 file is defined as a container for storing a variety of data and contains two core types of objects:
• Groups: a structure containing zero or more HDF5 objects, along with supporting metadata.
• Datasets: a multidimensional array of data elements, again with supporting metadata.
(Note that an HDF5 "file" is typically implemented as a single system-level file, but can span multiple system files if required.) Both HDF5 groups and datasets may have an attribute list associated with them, where an attribute is a user-defined HDF5 structure that provides additional information about an object. Obviously, the acquired data will be saved as a dataset, but perhaps less obviously the configuration metadata (of the data sources and the overall system) could comfortably be represented in a separate dataset within the same HDF5 file. Attributes, meanwhile, provide a convenient mechanism to store the narrative and descriptive metadata (such as that traditionally stored in TMATS records and the like).
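As an illustration of this layout, here is a minimal sketch using the h5py library. The group, dataset and attribute names, and the strain-gauge measurand itself, are illustrative choices rather than anything mandated by HDF5 or by this paper.

```python
import h5py
import numpy as np

# Hypothetical acquisition: a strain-gauge measurand sampled against the
# recorder's time counter.
ticks = np.arange(0, 1_000_000, 100, dtype=np.uint64)      # 100 ns ticks
strain = np.random.default_rng(0).normal(size=ticks.size)  # placeholder samples

with h5py.File("session001.h5", "w") as f:
    grp = f.create_group("left_wing")
    ds = grp.create_dataset("strain_gauge_1", data=strain)
    ds.attrs["time_base_ns"] = 100                      # descriptive metadata
    ds.attrs["units"] = "microstrain"
    ds.attrs["narrative"] = "strain gauge on left wing" # narrative metadata
    # Configuration metadata lives in its own dataset, orthogonal to the data:
    f.create_dataset("config/setup_record", data=np.void(b"...TMATS text..."))
```

Because the attributes hang off the objects rather than being interleaved with the samples, the metadata design can evolve independently of the data design, which is the orthogonality the text goes on to describe.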
Critically, this sort of metadata storage is orthogonal to the array of acquired data; this allows the design of the metadata structures to be independent of the design of the data structures, thereby permitting the use of, for instance, the Chapter 10 record formats with enhanced metadata. Of course, the fact that an HDF5 file includes embedded structures to manage the various components of the format means that the process of writing into an HDF5 file will be more complex than writing to a simple flat file. However, it is equally true that writing to a flat file in a file system is more complex than writing to raw storage (such as would be the case with magnetic tape), so the decision boils down to what level of performance penalty is acceptable for a given level of enhanced functionality. One significant achievement of these hierarchical file formats is that information is stored in usable data types, and the types are described as part of the file format. So instead of having to know how to map a particular sequence of bits into a useful value (such as an integer), the file itself contains that information. As the data is transferred off the test vehicle, therefore, a fully coherent "package" is exported intact, containing:
• the test data;
• the configuration metadata;
• the structural metadata;
• the descriptive metadata.
This has obvious advantages for the consumer of this data following the completion of the test.

6. Conclusion
The emerging demand that onboard data storage support more sophisticated operations than simply writing (and occasionally reading) sequential data imposes a requirement for more sophisticated storage structures than are currently employed. The interest in onboard data mining suggests that file architectures that support fast ad hoc searching have significant value to the T&E design engineer. In-band and out-of-band metadata are also a key feature to be considered in onboard storage design, as the video exploitation community has demonstrated by adding machine-readable metadata to its recordings. A key feature of the evolved data storage solution is broad support for comprehensive metadata at several different levels, including a level familiar to users of airborne digital video systems.

7. Acknowledgements
The author acknowledges with thanks the openness of the flight test community, in both industry and government, across the civil and military marketplace, and their willingness not only to share the challenges of their work but also to entertain wide-ranging potential solutions to those challenges. The author would also like to thank his colleagues and predecessors at Ampex for the innovations that have helped make recording in harsh environments a solvable problem.

8. References
[1] SMPTE 336-2007: Data Encoding Protocol Using Key-Length-Value.
[2] http://www.mediaware.com.au/D-VEX-video-exploitation, retrieved April 2015.
[3] http://www.gwg.nga.mil/misb/faq.html#section3.5, retrieved April 2015.
[4] http://www.hdfgroup.org/HDF5, retrieved April 2015.
[5] http://www.unidata.ucar.edu/software/netcdf, retrieved April 2015.

9. Glossary
EDP: Electronic Data Processing
HDF: Hierarchical Data Format
iNET: integrated Network Enhanced Telemetry
I/O: Input/Output
IRIG: Inter-Range Instrumentation Group
KLV: Key-Length-Value
MIMO: Master In, Master Out
MISB: Motion Imagery Standards Board
MPEG: Motion Picture Experts Group
NCSA: National Center for Supercomputing Applications
NGA: National Geospatial-Intelligence Agency
NIMA: National Imaging and Mapping Agency
OLTP: On-Line Transaction Processing
RDBMS: Relational Database Management System
SMPTE: Society of Motion Picture and Television Engineers
T&E: Test and Evaluation
TMATS: Telemetry Attributes Transfer Standard
UCAR: University Corporation for Atmospheric Research

¹ Chapter 10 introduced a specific packet type with defined storage formats for this sort of information in the 2011 version of the standard, but the point remains valid.
² April 14, 1956: Ampex introduced the videotape recorder at the NARTB (precursor of the NAB) show in Chicago.


ETTC 2015 – European Test & Telemetry Conference

Cabin Comfort Flight Tests Installation

Joël GALIBERT¹, Aymeric PLO¹, Stéphane GARAY¹
1: AIRBUS Operation SAS, Toulouse, France

Abstract: The verification of cabin comfort requires a large number of temperature (surface and ambient) and air velocity measurements in the cabin: at seat level, at the dado panels, at the air outlets, on the floor and lining, in the crew rests at the bunks, in the galleys, at the attendant seats and on the monuments. On A350 MSN2, 1,800 measurements were requested by the design office specialists. The innovation was a wireless data transmission that satisfied the specialists' request by proposing different standard kits (seat, aisle, bunk, cockpit, dado panel, air outlet), covering a high density of measurements across the complete cabin (~400 measurements) while keeping the kits simple to install and remove. The agility to set up a new configuration and relocate the different kits allows the specialists to complete the tests while minimizing the number of flight tests. The development of powerful visualization screens simplified real-time analysis for the specialist performing the test (thanks to the transmission of the data from the central receiver to the flight test engineer's station).

Keywords: Flight Tests, Wireless, Comfort

1. Introduction
A350 XWB cabin comfort: simpler and faster. The resources used to test the A350 XWB's cabin comfort – in other words, the wellbeing of passengers and crew owing to the temperature and speed of air flow – have made a technological leap forwards.

2. Comfort measurements requested
One thousand eight hundred ambient air and surface temperature and air speed measurements are needed for a cabin comfort test campaign, held over two weeks. By comparison, the flight test installation – excluding cabin comfort – counts 2,000 measurement points for a campaign that lasts several months.

3. Innovation based on wireless FTI
A transnational flight test project team proposed several innovations designed to meet the requirements of the A350 XWB test campaign. "We needed simpler and faster means" to perform the measurements requested by the design office for cabin comfort tests. The main innovation consisted of a wireless solution which replaced the wired installation for transmitting collected data. For the first configuration on board A350 MSN002, we installed 140 kits to collect ~400 measurements: around the windows, at the level of the air outlets, in the galley and crew rest areas, in the cockpit and on the seats. Each kit consists of one or more airspeed measurement probes and/or a temperature sensor. The seat kits, clipped onto tubes simulating the heat given off by passengers, are equipped with four sensors located above the head and at head, knee and foot level. As the kits are mobile and fitted with a transmitter, it is easy to move them around the aircraft for each configuration, depending on what tests are required as the flights progress.

Figure 1: Seat kit clipped on PAX heat dummy, and surface temperature kit around the window

Each kit, equipped with a small transmitter (10 mW, ISM 868 MHz frequency band, IEEE 802.15.4 standard, TDMA communication protocol), sends its data sequentially to a central box located in the centre of the cabin. The data are then merged (IENA protocol) and transmitted via an Ethernet link to the main flight test installation, to be displayed in real time and recorded for post-processing.

Figure 2: Wireless measurement chain
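As an illustration of the merge-and-forward role of the central box, here is a minimal sketch in Python. The IENA packet layout is Airbus-proprietary and not described in this paper, so the header below (kit ID, sample count, timestamp) and the destination address are hypothetical stand-ins, not the real protocol.

```python
import socket
import struct
import time

# Hypothetical merged-packet header: kit id, sample count, timestamp (NOT real IENA)
HDR = struct.Struct(">HHQ")
FTI_ADDR = ("fti-station.local", 50000)   # assumed address of the FTI station

def forward_samples(kit_id: int, samples: list[float], sock: socket.socket) -> None:
    """Merge one kit's TDMA burst into a timestamped Ethernet packet."""
    payload = struct.pack(f">{len(samples)}f", *samples)
    hdr = HDR.pack(kit_id, len(samples), time.time_ns())
    sock.sendto(hdr + payload, FTI_ADDR)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# e.g. a seat kit reporting 4 temperatures (above head, head, knee, foot):
forward_samples(kit_id=17, samples=[23.1, 23.4, 22.8, 22.5], sock=sock)
```

The TDMA scheme means each kit owns its own time slot, so the central box can handle each received burst independently of the others.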
4. Real time analysis
Data can be viewed in real time at the test engineer's station, making possible the correlation of outputs from aircraft systems with the results in terms of cabin comfort, even while the aircraft is in flight.

Figure 3: Powerful visualisation screens for better real-time analysis

5. Other kits developed
To improve agility, simplicity and speed, different standard kits have been defined to cover a great many measurements and to allow new configurations to be performed easily by relocating the different kits. Another improvement deals with the development of a solution for fastening the kits in place without leaving any marks on the cabin lining.

Figure 4: Air outlet kit
Figure 5: Bunk kit
Figure 6: Attendant seat and aisle kit

6. Outlook
This reactivity has been demonstrated with the four configurations applied during the cabin comfort flight tests between March and October 2014 on the A350 XWB-900 MSN2 development aircraft. As the feedback from flight test engineers and design office specialists was very good, this cabin comfort flight test installation will be re-used on the next A350 XWB-1000 program, and the wireless approach will be extended to other flight test measurement chains (other domains).

7. Conclusion
Altogether, these advances have strongly positive consequences for time, cost and quality. The kits are configured off-plane, and the time taken to install them has been reduced by 60%. On the financial side, the wireless system allows savings of up to 30% per measuring channel. The kits can be reused for other programs, generating significant savings. Lastly, from a quality point of view, configuring the installation is fast and simple, measurement points in the cabin are traceable, and flight tests are optimised. But one of the most important things is that these innovations contribute to improving the cabin product's maturity at "Entry Into Service" with the airlines.

8. Acknowledgement
We would like to thank the Bremen, Hamburg and Toulouse AIRBUS teams who contributed to this project and brought it to success.

9. Glossary
CCFTI: Cabin Comfort Flight Test Installation
IEEE: Institute of Electrical and Electronics Engineers
IENA: Test Installation for New Aircraft, in French "Installation d'Essais Nouveaux Avions"
ISM: Industrial, Scientific and Medical
TDMA: Time Division Multiple Access


ETTC 2015 – European Test & Telemetry Conference

The research on wireless sensor network for the aerocraft measurement system

Juan LU¹, Ying WANG¹, Bingtai LIU¹
1: Beijing Institute of Aerospace Systems Engineering, No.1 Nandahongmen Road, Fengtai District, Beijing, 100076

Abstract: WSN (Wireless Sensor Network) technology is an important and promising development direction for electronic measurement and data acquisition. In this paper, the project objective is to build a highly reliable WSN for the aerocraft measurement system. Firstly, our application requirements and the system framework are briefly illustrated. A wireless link model is proposed with full consideration of the complex electromagnetic environment, and the simulation and experiment results are demonstrated and analyzed. Another contribution is a proposed network capacity model: through an adaptive slot allocation strategy, the system reliability is guaranteed and the system efficiency is optimized.

Keywords: Wireless sensor network, electromagnetic environment, adaptive communication protocol

1. Introduction
Environmental parameters obtained by sensors are of great significance for understanding the aerocraft's state, analyzing malfunctions, improving the design, etc. However, the large numbers of sensors in the aerocraft are currently connected with cables, including power supply cables and signal output cables. The measurement system is therefore quite complex and cumbersome, which severely limits the effective payload of the aerocraft. Some parameters cannot be obtained, or cannot be obtained effectively, because the connecting cables are difficult to route in the harsh environment. Moreover, when new sensors are added or sensor locations are adjusted, these cables sometimes need to be designed and produced again and again, which costs tremendous time and resources. Given the above problems, a new technology is needed. Fortunately, with the development of sensor technology, embedded technology and communication network technology, WSN can realize electronic measurement and data acquisition without cables. WSN could therefore be helpful to the development process of the aerocraft in terms of technical state change and control.

2. WSN for the aerocraft
2.1 State of the art
WSNs have drawn great attention in many application fields as a way to obtain information from the physical environment, to perform simple processing on the extracted data and to transmit it to remote locations [1][2]. In our application, a WSN is adopted as part of the aerocraft measurement system: aerocraft parameters such as temperature, pressure and vibration are sensed, processed and finally sent to the ground. As far as we know, organizations such as NASA and the Jet Propulsion Laboratory (California Institute of Technology) have implemented WSNs for aerocraft measurement systems, although the details are not public. What is clear is that this is the first time WSN technology has been adopted to obtain aerocraft environment parameters in the domestic domain.

2.2 Project objective
The objective is to build a reliable WSN for the aerocraft measurement system, which is expected to simplify the design, be easy to install and maintain, and satisfy various measurement requirements flexibly. More precisely, the following factors will be fully considered during the system design:
• Configure the system with flexibility;
• Shorten the period of development;
• Reduce the cable break problem;
• Decrease the weight of the aerocraft.
2.3 DSWA system
In this paper, a WSN called the DSWA (Data Sensor and Wireless Acquisition) system is proposed, as shown in Figure 1.

Figure 1: DSWA system framework (temperature, pressure and vibration sensors with wireless convertors communicate with the controller node on board, which is wired to the VPX demoboard and transmitter; the manager node serves a console on the ground)

The DSWA system contains a controller node, a manager node and the wireless sensors. On board, a star topology is adopted to guarantee communication reliability. The controller node is wired to the VPX measurement demoboard. The manager node works on the ground for system test and display. Only the DSWA system will be discussed; the other parts are out of the scope of this paper.

2.3.1 Sensor node in DSWA
The sensor node in DSWA has the following functions:
• The ability to sense environmental signals. Many mature sensor modules, including temperature, pressure, vibration, etc., can be applied depending on the specific measurement requirements.
• Signal conditioning. The microcontroller drives the A/D converter to gather the conditioned output.
• Set and read TEDS (Transducer Electronic Data Sheets). The TEDS contents in DSWA are shown in Table 1 (a sketch of this structure as a data record follows the node descriptions below).

Table 1: Sensor TEDS
1. Sensor identification parameters: Manufacturer, Code number, Serial number, Version number
2. Sensor parameters: Type, Range, Sensitivity, Bandwidth, Unit, Accuracy
3. Calibration parameters: Last calibration data, Correction coefficient
4. Application parameters: Channel number, Position, Direction

• Coding and arithmetic functions. The microcontroller assembles the collected sensor information, battery voltage information and wireless signal strength information into the communication frame.
• Wireless transmission. The sensors can communicate with the controller node and the manager node over the wireless link.
• Internal management. The sensors handle power supply management, voltage detection, mode switching and other functions.

2.3.2 Controller node in DSWA
The controller node in DSWA has the following functions:
• DSWA system establishment. The sensors register with the controller. After successful registration, the controller allocates the communication slots to the sensors and completes the system establishment.
• Wireless transmission and reception. On the one hand, the controller node receives the synchronous signal from the VPX measurement demoboard and forwards the information to the sensors. On the other hand, it receives the sensor information from all the sensors and, after processing, uploads it to the VPX measurement demoboard.
• Other control functions. The controller node can communicate with the manager node through both wired and wireless interfaces in order to control the sensor parameter configuration.

2.3.3 Manager node in DSWA
The manager node in DSWA has the following functions:
• Wireless communication. The manager node can communicate with both the sensors and the controller node. However, once the controller node starts to work, the manager node only monitors the DSWA system.
• Sensor management. The manager node can configure the sensor parameters and change the sensor mode.
• Controller management. The manager node can also configure the controller node and change its mode.
• Data processing. In the system test phase, the manager node can collect the sensor data, process it and finally display it at the terminal.
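The following is the minimal sketch of the Table 1 TEDS contents announced above, expressed as a data record; the field names paraphrase the table entries, and the representation is illustrative rather than the binary TEDS encoding of the applicable standards.

```python
from dataclasses import dataclass

@dataclass
class SensorTEDS:
    # 1. Sensor identification parameters
    manufacturer: str
    code_number: str
    serial_number: str
    version: str
    # 2. Sensor parameters
    sensor_type: str        # e.g. "temperature", "pressure", "vibration"
    range_: tuple           # (min, max) expressed in `unit`
    sensitivity: float
    bandwidth_hz: float
    unit: str
    accuracy: float
    # 3. Calibration parameters
    last_calibration: str
    correction_coefficient: float
    # 4. Application parameters
    channel: int
    position: str
    direction: str

# Illustrative instance for a temperature sensor:
teds = SensorTEDS("ACME", "T-100", "SN0042", "1.0", "temperature",
                  (-40.0, 85.0), 0.01, 10.0, "degC", 0.5,
                  "2014-11-03", 1.002, 3, "engine bay", "n/a")
```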
2.4 Challenges in our application
Nowadays, WSN technology has been researched academically by many groups and widely used in many fields. However, our application has distinctive characteristics. Firstly, the aerocraft is a complex electromagnetic system which consists of various high-power and high-sensitivity devices in an airtight space. The biggest challenge is therefore electromagnetic compatibility: on the one hand, the DSWA system must not disturb the other systems in the aerocraft; on the other hand, as the DSWA system is a low-cost, low-power system, it should not be disturbed by the other systems. Secondly, the biggest difference between a generic WSN and DSWA is the application background, especially how to make DSWA fit into the existing system workflow; a feasible and reasonable protocol is therefore quite important for the DSWA system. These two topics are discussed further in parts 3 and 4 below.

3. Wireless link design
3.1 Wireless link model
The DSWA system physical layer is based on the IEEE 802.15.4 standard [3]. It works at 2.4 GHz and has 16 channels; the data rate is 250 kbps. Furthermore, dynamic channel selection, direct-sequence spread spectrum and frequency hopping are applied to strengthen the interference immunity of the DSWA system.

3.2 Simulation work and the result
In order to investigate the electromagnetic environment of the aerocraft, a typical application scenario is simulated with E-MIND [4]. In the simulation, both large-scale fading and small-scale fading are considered [5][6]. The adopted formulas are:

PL(d) = PL_0 + 10\gamma \log_{10}(d/d_0) + S    (1)

h(t) = \sum_{k=0}^{M} \beta_{k,1}\, \delta(t - T_1 - \tau_{k,1}) + \sum_{k=0}^{N} \beta_{k,2}\, \delta(t - T_2 - \tau_{k,2})    (2)

In formula (1), \gamma is the path loss exponent, PL_0 is the path loss at the reference distance d_0, and S is the shadow effect, which obeys a Gaussian distribution with standard deviation \delta. In formula (2), \beta_{k,1} and \beta_{k,2} are the multipath amplitudes of the first and second clusters, and T_1 and T_2 are the arrival times of the first and second clusters, with T_1 = \tau_{0,1}, T_2 = \tau_{0,2} and T_2 - T_1 = T, where T is a known constant. \tau_{k,1} and \tau_{k,2} are the arrival times of the k-th multipath component in the first and second clusters; they obey a Poisson distribution. M and N are the numbers of multipath components in the first and second clusters.

As shown in Figure 2, a controller node and two sensor nodes are placed in the aerocraft.

Figure 2: Simulation scenario

One of the simulation results concerns the antennas, as shown in Figures 3 and 4; it indicates that the physical layer can satisfy our application scenario. The interference between the devices in the aerocraft is acceptable, and the DSWA system can work successfully without data loss.

Figure 3: Antenna of the controller node
Figure 4: Antenna of the sensor node
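As a small numerical sketch of the link model, the following evaluates formula (1) and assembles the two-cluster taps of formula (2); the path-loss and cluster parameters are illustrative placeholders, not values from the paper's simulation.

```python
import numpy as np

rng = np.random.default_rng(1)

def path_loss_db(d, pl0=40.0, d0=1.0, gamma=2.2, sigma=3.0):
    """Formula (1): PL(d) = PL0 + 10*gamma*log10(d/d0) + S,
    with S ~ N(0, sigma^2) modelling the shadow effect.
    pl0, gamma and sigma are illustrative cabin values, not the paper's."""
    shadow = rng.normal(0.0, sigma)
    return pl0 + 10.0 * gamma * np.log10(d / d0) + shadow

def two_cluster_taps(t1, t2, beta1, beta2, tau1, tau2):
    """Formula (2) as (delay, amplitude) pairs: each cluster contributes
    impulses at T_i + tau_{k,i} with amplitude beta_{k,i}."""
    return ([(t1 + tau, b) for tau, b in zip(tau1, beta1)] +
            [(t2 + tau, b) for tau, b in zip(tau2, beta2)])

print(f"PL at 5 m: {path_loss_db(5.0):.1f} dB")
```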
3.3 Implementation work and the result
The simulation work alone is not sufficient to verify the electromagnetic compatibility of the aerocraft, so the following experiments were carried out on DSWA.
• High temperature environment: a controller node and two sensor nodes are placed in a temperature box at 75 °C for 2 hours. The experiment is repeated 20 times and the BER (Bit Error Rate) is 0.
• Low temperature environment: a controller node and two sensor nodes are placed in a temperature box at -40 °C for 2 hours. The experiment is repeated 20 times and the BER is 0.
• Flight vibration environment: a controller node and two sensor nodes are fixed on the vibration platform with the vibration requirement (grms = 18.2) shown in Table 2. The experiment is repeated 20 times and the BER is 0.

Table 2: Vibration requirement
20 Hz to 80 Hz: +6 dB/oct
80 Hz to 400 Hz: 0.14 g²/Hz
400 Hz to 2000 Hz: 0.18 g²/Hz
Time: 4 min × 20

• Flight impact environment: a controller node and two sensor nodes are fixed on the vibration platform with the impact requirement shown in Table 3. The experiment is repeated 20 times and the BER is less than 2.99 × 10⁻⁷. When cushions are installed for the devices, the BER becomes 0.

Table 3: Impact requirement (half sine wave, repeated 20 times per axis)
Amplitude (g): 90 (X), 90 (Y), 90 (Z)
Duration (ms): 1.0 (X), 1.0 (Y), 1.0 (Z)

• Cabin environment emulation: a controller node and two sensor nodes are placed in the sealed cabin. Many scenarios are tested to make multipath transmission occur. The results show that the devices can communicate with each other successfully (30 min) and the BER is 0.
• Interference environment: in the last experiment, a transmitter with variable power and different placements joins the sealed cabin. Thanks to the frequency hopping of the DSWA system, the BER is 0.

4. System protocol design
4.1 DSWA application requirements
Reliability is the first requirement of the application, so an algorithm based on a TDMA strategy is proposed. The communication slots should be allocated adaptively and obey the following formulas (a numerical sketch of this allocation is given after section 4.2):

M = \lceil (f_H + f_L + f_S)/L \rceil    (3)

(f_H + f_L + f_S) + \lceil (f_H + f_L + f_S)/L \rceil \cdot L_{HR} \le R    (4)

In the above formulas, f_H is the sampling frequency of the high-frequency sensor data, f_L is the sampling frequency of the low-frequency sensor data, and f_S is the sampling frequency of the slowly changing sensor data. L is the maximum data frame length allowed by the physical layer, and L_{HR} is the frame overhead. R is the data rate, and M is the number of slots in one second.

4.2 Protocol and system work flow
Wireless sensor nodes have two working conditions, "shutdown" and "running". In the "running" condition, sensors can switch between sleep mode, standby mode and work mode, as shown in Figure 5.

Figure 5: Sensor node work protocol (state machine: connecting or disconnecting pin 1 and pin 5 toggles between shutdown and running; within the running condition, registration results and sleep/standby commands drive transitions among sleep, standby and work modes, with beacons sent periodically)

The controller node also switches between sleep mode, standby mode and work mode, as shown in Figure 6. It starts to work once the synchronous signal is received from the VPX measurement demoboard. Only the manager node, through a USB interface, can put the controller node from standby mode into sleep mode.

Figure 6: Controller node work protocol (transitions driven by the VPX synchronous signal and USB control commands)
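Here is the numerical sketch of the adaptive slot allocation announced in section 4.1, implementing formulas (3) and (4) as reconstructed above; the sample rates and frame parameters are illustrative values, not the paper's.

```python
import math

def slots_per_second(f_h, f_l, f_s, frame_len):
    """Formula (3): M = ceil((f_H + f_L + f_S) / L) slots per second."""
    return math.ceil((f_h + f_l + f_s) / frame_len)

def link_feasible(f_h, f_l, f_s, frame_len, overhead, rate):
    """Formula (4): payload plus per-frame overhead must fit within the data rate R."""
    m = slots_per_second(f_h, f_l, f_s, frame_len)
    return (f_h + f_l + f_s) + m * overhead <= rate

# Illustrative values; units must simply be consistent (here bytes/s and bytes)
f_h, f_l, f_s = 12_000, 2_000, 100   # high / low / slowly-changing source rates
L_frame, L_hr = 96, 12               # max frame payload and frame overhead
R = 250_000 // 8                     # 250 kbps IEEE 802.15.4 rate, in bytes/s
print(slots_per_second(f_h, f_l, f_s, L_frame))        # 147 slots per second
print(link_feasible(f_h, f_l, f_s, L_frame, L_hr, R))  # True: the link can carry it
```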
5. Conclusion
This paper aims to explore the feasibility of WSN technology for the aerocraft measurement system. Based on a brief introduction of the project background and requirements, the advantages of WSN have been fully elaborated. A novel DSWA system was proposed in this paper, and the system framework and its constituents were illustrated. The contributions of this paper are twofold. On the one hand, the system link was designed, simulated and tested experimentally; the results show the good performance of DSWA, and the electromagnetic environment of the aerocraft could be further studied. On the other hand, through analysis of the application, a network capacity model was proposed; the work protocols for both the sensor node and the controller node were presented and work well in DSWA. The next step is to take part in a real flight experiment.

6. References
[1] Akyildiz I. F.: "A survey on sensor networks", IEEE Communications Magazine, 2002, 40(8), pp. 102-144.
[2] Xiaowan H. et al.: "Using integer clocks to verify the timing-sync sensor network protocol", NFM Conference, 2010, Apr. 13-15.
[3] IEEE 802.15.4 standard.
[4] E-MIND datasheet.
[5] Saleh A.: "A Statistical Model for Indoor Multipath Propagation", IEEE Journal on Selected Areas in Communications, 1987, 5(2), pp. 128-137.
[6] Jalden N.: "Correlation properties of large scale fading based on indoor measurement", IEEE Wireless Communications and Networking Conference, 2007, Mar. 11-15.


- How to Harness the Value of IoT / Fast Data / Big Data and Data Analytics Technologies for the Tests Community - Frédéric Linder and Stéphane Biguet – Oracle – France

There are many opportunities to extract value from the data generated in the connected Aerospace & Defense world, but the rapid growth in the number of intelligent devices presents many challenges and has a significant impact on the architecture at the convergence of IoT services, mobile, cloud, big data and analytics. Oracle will present a complete life cycle from the physical world to analytics, with insights into a holistic solution. A robust and scalable infrastructure is needed that is always on and can handle massive amounts of data, transforming it for immediate business value. These end-to-end requirements include greater device flexibility, comprehensive security, device analytics and dynamic device intelligence. Couple that with a purpose-built enterprise solution that incorporates business intelligence, big data and analytics to help test engineers make decisions faster than ever.


ETTC 2015 – European Test & Telemetry Conference

Big Data Applications For Telemetry

Greg Adamski, Gilles Kbidy
L-3 Communications Telemetry-West, 9020 Balboa Avenue, San Diego, CA 92123

Abstract: Telemetry processing systems collect vast amounts of data, which must be made available to users as quickly as possible and safely kept for as long as possible. Traditional data storage methodologies are quickly being replaced with modern solutions commonly referred to as Big Data and based on NoSQL databases, which have started to appear in industry over the last few years. This paper aims to demonstrate how relevant and applicable these new technologies are to the world of telemetry and data acquisition.

Keywords: Big Data, Telemetry, NoSQL, benchmarks

1. Introduction
The advent of Big Data and NoSQL started a revolution in the world of databases, providing new solutions to age-old problems. Why is this technology relevant, and how can it be applied to telemetry data?

2. Big Data, NoSQL and Telemetry
2.1 Telemetry data systems
In telemetry systems, what used to be addressed with hardware solutions is now expected to be resolved in software alone. Results originally stored on magnetic tapes or removable disks are now expected to always be on-line and available for analysis. Users from geographically diverse locations expect that data from multiple years of test flights or control center operation be readily available, to better understand test results and more accurately predict future performance, including the possibility of anomaly or failure.

2.2 Big Data Overview
Over the last few years, data requirements in other industries have increased to unforeseen levels, requiring a technology leap in data storage and management. The resulting technologies such as Big Data, NoSQL and Map/Reduce are more than just buzzwords: they represent a paradigm shift in the design of data storage and analysis systems, from ultra-reliable and very expensive to fault-tolerant, massively distributed, flexible and frequently open-source data processing clusters. Content-aggregating companies such as Google, Yahoo, Facebook, and Twitter realized early on that traditional Relational Database Management Systems (RDBMS) were ill-suited to the type of data and queries required by modern web applications. Indeed, although still the norm for most databases out there, RDBMS solutions come with their share of constraints and problems:
• Large-scale data sets must reside on expensive and specialized servers, driving high hardware costs and scalability issues.
• Database schemas must be defined at the beginning of the project and cannot easily adapt to ever-changing needs in data content and queries.
• Data layout in relational databases cannot be controlled very well, which leads to poor performance when a large number of records needs to be processed.
Around 2004, Google pioneered a new engineering adventure and set out to develop a replacement for SQL focusing on distributed data and massive parallel queries. The initial design was called "Big Table" and was a technology proprietary to Google. This breakthrough opened the door to many more technical solutions expanding on the initial concept of low-cost, massively parallel processing and fault tolerance. The term Big Data was coined around that time and, according to Wikipedia, now designates "data sets so large or complex that traditional data processing applications are inadequate".
2.3 NoSQL Overview
As part of the Big Data phenomenon, the term NoSQL appeared around 2008 to designate database systems not based on SQL, often referred to as Not-Only SQL. There are many different types of NoSQL implementations available today, but most share these common features:
• Massively distributed: one of the key concepts of NoSQL databases is horizontal scalability, where data can be partitioned across thousands of nodes, and adding new nodes to the system for growth or replacement is inherently simple. In terms of performance, the built-in distributed design provides linear scalability, and some NoSQL implementations such as Cassandra claim linear performance gains when new nodes are added to the cluster.
• Fault-tolerant: failure of a node in the infrastructure does not impact data integrity and availability. Consistency and replication levels are often tuneable.
• Low cost: designed to run on low-cost commodity hardware nodes.
• Flexible: schema-less and unstructured designs allow collocation of different data elements and evolution of data attributes over time.
There are different types of NoSQL databases. The most common ones are:
• Column-oriented: similar to SQL in concept but less rigid, where a given column may appear in one row of the table but not others.
• Key-Value: data is treated as a collection of objects which may include different fields for every object.
• Document: a special type of Key-Value DB with more visibility into the data and metadata being stored.
There are other types of NoSQL databases, but these are the most popular options. Of the three, we focused on column-oriented and document-oriented implementations during our investigation. Although NoSQL is not the answer to everything, its benefits become increasingly evident as the data set grows. According to DataJobs.com, the performance of a traditional RDBMS decreases rapidly with volume:

Figure 1: NoSQL scalability vs. traditional RDBMS

2.4 System Architecture
Big Data systems scale from simple network infrastructures (primary/backup servers) to large-scale, distributed systems across multiple nodes (in many cases thousands) and physical locations. They provide enormous scalability but also present unique challenges. One of the key advantages of a Big Data infrastructure is its ability to run on commodity hardware and easily grow over time as more hardware becomes available. Therefore there can be any number of production environments suitable for a Big Data deployment. However, making the right decisions in terms of cluster architecture and configuration can be challenging and error-prone with a NoSQL solution, more so than with SQL. This is largely due to the fact that SQL is data-model centric (i.e., data elements and their relationships drive schema design), whereas NoSQL databases tend to be designed around use cases, where the focus is on how data will be accessed and used. When planning the system architecture, it is important to consider write vs. read performance (which one is more important to the application), collections, aggregation and typical queries. For example, with a NoSQL database, data normalization is no longer a requirement for an efficient design. In some cases, it is better to de-normalize the data and allow some duplication for read performance; a sketch of such a layout follows. This is just one of the many design strategies that can seem counter-intuitive coming from an SQL world.
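The layout sketch announced above, expressed as CQL issued through the DataStax Python driver: the keyspace, table and column names are illustrative assumptions, not a schema from the paper.

```python
# pip install cassandra-driver   (DataStax Python driver; assumed available)
import datetime
from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect()
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS telemetry
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}
""")
# Use-case-driven layout: one partition per (measurand, day), clustered by time,
# so "plot measurand X over period Y" becomes a sequential read of one partition.
# The measurand name is deliberately duplicated in every partition (denormalized).
session.execute("""
    CREATE TABLE IF NOT EXISTS telemetry.samples (
        measurand text,
        day       date,
        ts        timestamp,
        value     double,
        PRIMARY KEY ((measurand, day), ts)
    )
""")
session.execute(
    "INSERT INTO telemetry.samples (measurand, day, ts, value) "
    "VALUES (%s, %s, %s, %s)",
    ("strain_gauge_1", datetime.date(2015, 4, 1),
     datetime.datetime(2015, 4, 1, 12, 0, 0), 12.7),
)
```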
Reaching peak efficiency in a NoSQL implementation requires a different thought process in the planning phase and lots of experimentation and fine-tuning of the configuration parameters, cluster organization, replication strategies and so on. In our tests, we were often quite surprised to observe significant differences in read/write performance by simply tweaking or altering our schemas and driver strategies. Some of these results are discussed later in this paper.

2.5 Solutions available in the market
There are numerous technologies available in the market today, but with various degrees of applicability to telemetry data storage, processing and analysis. For our investigation, we focused on two of the major players: Cassandra and MongoDB. Cassandra is one of the leading column-oriented NoSQL databases, whereas MongoDB offers a document-oriented solution. These two products are widely used in industry and seemed to be good choices for telemetry applications.

2.6 Possible applications for telemetry
Telemetry systems typically generate a massive amount of data collected throughout the lifecycle of a mission. Regardless of the type of telemetry data collected, there is a common pattern where data must be made available as quickly as possible and for as long as possible. Be it in flight test campaigns, instrument bench tests or satellite on-orbit telemetry, it is paramount to transfer the data from its source to persistent storage where it can be available to users in near real time. To achieve this goal, a few factors must be considered:
• Latency: the delay between the time a data sample is received from the telemetry system and the time when it is retrievable from the archiving system.
• Throughput: the write/read performance to insert data records into, or pull them from, the archiving system.
• Size: collecting telemetry data over multiple years creates interesting challenges in terms of scalability. Ideally the archiving system should preserve read performance across the data set (i.e., retrieving 5-year-old data should not take longer than 6-month-old data), and overall performance should not degrade as the data set grows.
Latency is usually not an issue with a NoSQL database. Unlike file-based storage, where data being recorded to the current file may not be available until the file is closed or its metadata attributes are updated, writing individual records to a NoSQL database (or to an SQL database, for that matter) can be performed in small batch operations and usually results in shorter latencies. Throughput, however, can be more challenging. In traditional archiving systems, file-based solutions usually provide the highest write performance and are typically faster than database inserts; that is certainly the case with traditional SQL databases, anyway. NoSQL, on the other hand, presents an interesting perspective with regards to write performance scalability. One of the key goals of the Big Data revolution was to easily handle very large amounts of data (the Apache HBase documentation refers to "hundreds of millions" of rows), and therefore solutions with very high write and read throughput have started to appear. The Cassandra documentation claims that "write and read performance scale linearly with the addition of nodes to the cluster". It would seem that such a design would be ideally suited to a telemetry system, where write throughput is based on the number and rate of telemetry streams being archived.
Figure 2 below illustrates the linear-scalability claim in terms of operations per second.

Figure 2: Cassandra's linear scalability

Finally, in terms of footprint (data size on disk), NoSQL also brings very strong arguments to the discussion, for similar reasons to the ones stated in the paragraph above. The ability to grow the cluster size by simply adding more nodes (while the cluster is online and operational) greatly simplifies disk space planning and management concerns. An organization may choose to start with a small configuration consisting of a few nodes and grow its cluster as needs arise. Building a heterogeneous cluster of nodes is usually supported and lets the customer use state-of-the-art equipment as technology progresses over the years.

NoSQL databases can offer unprecedented levels of performance in terms of data latency, write/read throughput, scalability, and amount of data collected. The challenge resides in making the correct design and implementation choices to achieve peak results given a particular use case and application.

3. NoSQL Experiments

3.1 Goals

Archiving of telemetry data poses a unique set of challenges: data is normally downlinked in a very compact, multiplexed fashion.

Figure 3. Multiplexed parameter layout

Users very rarely request all of the parameters when querying their data store. What this means is that as the data is read from disk, the read head must move frequently to reach the places where the data points requested by the user are located.
Consider the following: for a traditional magnetic hard drive, the transfer rate can be anywhere between 80 and 150 MB/s when reading a continuous block of data. This metric changes dramatically when reading smaller chunks: at 4 kB per chunk, speeds on the order of 0.5 to 1 MB/s are the norm. When using solid-state storage those numbers look better for smaller reads, but considering that over the lifetime of a satellite multiple terabytes of data would be collected, the cost of such a storage subsystem becomes too high for many users.

Ideally, the data should be laid out on disk in the way that most closely matches the way users ask for it, which in many cases means grouping the same measurands, grouping data from the same test runs and, in general, dispensing with the multiplexing.

Figure 4. Grouped parameter layout

Now, one must ask the question: what criterion should be used when laying data out on disk? Measurand name seems like a sensible one, since users will frequently ask for a data plot for a specified time period. Some may want to take specific test runs into account, others a logical grouping within their telemetry stream specification (e.g. APID). At the end of the day, different users will have different use cases for their data, and these will drive the layout of the data on disk.

An important difference between a NoSQL database and a traditional relational DB is that traditional databases were focused on schema consistency and correctness as described by relational algebra. Redundancy was frowned upon, and performance issues were typically resolved by creating higher-level constructs such as indexes. Many NoSQL databases approach this from a different angle: rather than focusing on the data model, they focus on how the data is being used. Table and index design will thus drive the way data is laid out on disk and within the data center, allowing users to customize their system to best suit their needs.
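Laying data out to match access patterns matters because of the seek arithmetic quoted at the start of this section. A short back-of-the-envelope sketch reproduces those figures; the drive characteristics below are typical assumed values, not measurements:

# Every non-contiguous 4 kB read pays one seek before a very short transfer.
SEEK_S = 0.010       # ~10 ms average seek time, typical magnetic drive (assumed)
SEQ_RATE = 100e6     # ~100 MB/s sustained sequential transfer (assumed)
CHUNK = 4096         # bytes per random read

effective = CHUNK / (SEEK_S + CHUNK / SEQ_RATE)
print(f"effective random-read rate: {effective / 1e6:.2f} MB/s")  # ~0.41 MB/s

The seek term dominates completely, which is why scattered 4 kB reads land in the 0.5-1 MB/s range quoted above while sequential reads run two orders of magnitude faster.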
One of the goals of our tests was to determine how well different databases behave under optimal conditions and how much flexibility we would get in designing the storage to match our user needs. While model-driven design did not perform as well under optimal conditions (i.e. data requested together is stored together), it allowed for greater flexibility and decent performance under any conditions: add an index and most queries will work reasonably well. An important outcome of these tests is to see how the system behaves when the data layout does not match what the user asks for and less-than-optimal queries need to be used.

3.2 Test configurations

In our testing we successfully installed and ran instances of Cassandra and MongoDB on various hardware platforms. Single-node configurations included:
• Laptop with i7 processor, 16GB of RAM and SSD drive
• Desktop workstation with i7 processor, 16GB of RAM and SATA drive
• Server with quad-core Xeon X5560 processors, 32GB of RAM and RAID-5 SCSI drives

Multiple-node configurations consisted of:
• 3 virtual machines with two virtual cores, 8GB of RAM and virtual disks

All of our test configurations had at least 50GB of disk space available for our tests. Invariably, installing Cassandra or MongoDB on these platforms was trivial. Both products are fairly self-contained and installation takes a couple of minutes.

The following product versions were used for these tests:
• Cassandra 2.0
• Cassandra 2.1
• MongoDB 2.6
• MongoDB 3.0

The following driver versions were used:
• Cassandra Java Driver 2.1.4
• Cassandra Python Driver 2.5.1
• MongoDB Java Driver 3.0.0

In both cases, driver usage and documentation were very good and easy to adopt. For Cassandra, additional libraries were required:
• netty 3.9.0
• slf4j 1.7.5
• metrics-core 3.0.2

MongoDB, on the other hand, had no other dependencies.

3.3 Data modelling considerations for Cassandra

From a modelling standpoint, on the surface Cassandra looks very similar to a traditional SQL database.
The main collections are called tables, and data is inserted into rows which consist of a primary key and column values. Cassandra also offers a query language similar to SQL, called CQL (Cassandra Query Language).

Figure 5: Cassandra Data Model

But despite many similarities to its SQL cousin, Cassandra's underlying data model is radically different and offers certain advantages not found in SQL. For example, Cassandra is ideally suited to model time-series data (i.e. time-stamped data usually received in sequence) and is therefore a great solution for telemetry data. Its data model allows the creation of a primary key which determines data ordering on a physical node. So, for example, the following primary key could be used to efficiently query parameter values by time and id from the database:

PRIMARY_KEY (timestamp, param_id)

Using the primary key above guarantees sequential reads from the physical node, resulting in great performance gains. The first column(s) of the primary key can also be designated as the partition key, which dictates how data is organized on the nodes in the cluster. This is a very powerful method to distribute processing loads on multiple nodes and ensure fast read cycles. Aside from these advantages, Cassandra offers built-in compression with barely noticeable performance degradation. Multiple algorithms are available and can be enabled at table creation.

3.4 Data modelling considerations for MongoDB

Data modelling in MongoDB was significantly different compared to Cassandra. MongoDB is a document-based database, which drives how storage is designed and how data is stored inside each document.

Figure 6. Mongo is a document-based DB

The database schema defines only high-level collections, which are lightweight containers for documents. Documents within a collection do not need to share the same layout or schema: each has enough metadata to be self-describing. While this allows great flexibility, it also means that the overhead associated with each document is fairly significant. This has to be taken into account when designing storage for telemetry data.
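One common way to limit this per-document overhead, developed further in the next paragraphs, is to bucket many samples into a single document. A minimal pymongo sketch follows; the field names are illustrative, not the schema used in our tests:

# Sketch: amortizing per-document overhead by bucketing samples per
# parameter and time window (hypothetical field names).
from pymongo import MongoClient

col = MongoClient("mongodb://localhost:27017")["telemetry"]["samples"]

def store_bucket(param_id, samples):
    # One document holds a whole list of samples for one parameter,
    # e.g. samples = [{"ts": ..., "raw": ..., "cal": ...}, ...]
    col.insert_one({
        "param_id": param_id,
        "start": samples[0]["ts"],
        "end": samples[-1]["ts"],
        "samples": samples,
    })

# A compound index keeps time-range queries fast as the collection grows
# into millions of documents.
col.create_index([("param_id", 1), ("start", 1)])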
Mongo lacks data clustering capabilities: indexes are created on top of the existing data store and are largely independent of the actual data layout. For a high-speed data storage solution, documents will typically hold multiple data samples. Various strategies can be employed when grouping samples within individual documents, both from the data model perspective (embedded documents, collections of samples) and from the access perspective: time ranges stored within documents, lists of different measurands stored, etc. Even considering that each document may contain multiple samples, over time a large number of documents will be created. Indexes are critical in ensuring quick data retrieval times.

3.5 Test result overview

During our tests we focused on storing decommutated parameter values; each sample contained the following:
- Raw value
- Uncalibrated value
- Calibrated value
- Timestamp
- Status information

Our reference test streams resulted in approximately 100k samples per second per 1 Mb/s of raw data. For the data retrieval tests we used a pre-generated one-year data set with a database of ~3k parameters. The data rate used resulted in approximately 500 samples per second for the duration of the entire year, which resulted in ~15 billion samples in the final data set.

3.6 Test results for Cassandra

3.6.1 Archiving

Being a column-oriented database, we found that Cassandra required many CPU cycles to store incoming telemetry data. On our laptop test bed, the maximum archiving rate was approximately 100k samples per second, which corresponded to a 1 Mbps raw data stream.

Figure 7. Cassandra storage test bed

When using a Cassandra cluster of 3 machines (2 CPU cores each) and a 2-core server, we were able to run a full telemetry chain including simulator, decom and archiver, reaching a 3 Mbps (300k samples per second) data rate while using approximately 30% of CPU on the data processor machine and not more than 10% on each of the Cassandra cluster nodes.

3.6.2 Retrieval

We tested various retrieval scenarios, with resulting retrieval rates of approximately 500k-600k samples per second.
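For illustration, one typical retrieval scenario boils down to a range scan within a partition. The sketch below assumes a table partitioned by parameter and clustered by timestamp (one common layout, not necessarily the exact schema used in these tests); the parameter id is invented:

# Sketch: all samples of one parameter over a time range.
from datetime import datetime
from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect("telemetry")
select = session.prepare(
    "SELECT ts, value FROM samples_by_param "
    "WHERE param_id = ? AND ts >= ? AND ts < ?")

rows = session.execute(
    select, ("p042", datetime(2015, 1, 1), datetime(2015, 1, 2)))
for ts, value in rows:
    ...  # rows stream back in timestamp order, i.e. mostly sequential reads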
3.6.3 Further testing

Additional testing is required to determine the level of scalability that can be achieved with Cassandra. Our test setup consisted of only one data generation machine, which limited our ability to test how far we could push our existing cluster and what effect adding nodes would have on the system's ability to handle more data streams.

3.7 Test results for MongoDB

Figure 8. MongoDB test bed

3.7.1 Archiving

We ran our MongoDB tests with data rates up to 2 Mbps (200k samples per second). With tuning, we were able to achieve similar results with MongoDB as with Cassandra, with the exception of higher CPU utilization on the database cluster: approximately 30% of CPU was used on each of the replica set nodes.

3.7.2 Retrieval

MongoDB provides a very powerful query language that offers both key lookup capabilities and very flexible range queries. That, combined with a flexible indexing strategy, led us to design a data structure that minimized the number of documents containing irrelevant information when performing time-based queries. This ended up getting us in trouble: for a very large document set (100k+), range queries were not as optimized as we would have thought, resulting in very long retrieval times. It took more than one minute for the initial query to even return any documents. Simplifying our data layout resulted in retrieval rates of up to 300k samples per second.

3.8 Data consistency considerations

Traditional relational database servers put a lot of emphasis on data consistency. Constructs such as constraints, locks, and above all transactions allow the database designer to provide a high level of data consistency at the database level. Most of those constructs exist in NoSQL databases only in a very basic form (document-level atomicity, lightweight transactions). It is therefore up to the application layer to guarantee data consistency, which poses a new set of challenges for application designers.

4. Recommendations

Because there is no one-size-fits-all with NoSQL, our recommendation to anyone interested in pursuing a NoSQL solution would be to clearly define the application, use cases and query patterns, while allocating enough time for hands-on experimentation with a few different NoSQL approaches. It is very important to identify the set of use cases early in the project: while changing data structures is normally fairly straightforward, changes can get quite time-consuming when dealing with large amounts of data.
5. Conclusion

Modern NoSQL servers allow application providers to quickly and inexpensively create flexible and scalable data storage solutions that would require a very large investment using traditional database servers. Data storage and access layers allow users and developers to efficiently model their data and to gain quick access to it. In the telemetry world, these new technologies should shift the focus from mere storage to value-added data solutions: statistical analysis, predictive analytics, data visualization. The road to better telemetry data utilization is only just beginning.

6. References

[1] Wikipedia: Big data
[2] Apache HBase documentation
[3] DataStax Cassandra documentation

7. Glossary

DB: Database
RDBMS: Relational Database Management System
SQL: Structured Query Language
NoSQL: Not-Only SQL
CQL: Cassandra Query Language


ETTC 2015 – European Test & Telemetry Conference

Case Study: Proposal of an Architecture for Big Data Adoption

A. MSc. Luiz Eduardo Guarino de Vasconcelos, B. Eng. André Yoshimi Kusumoto, C. Dr. Nelson Paiva Oliveira Leite, D. Dra. Cristina Moniz Araújo Lopes
Instituto de Pesquisas e Ensaios em Voo (IPEV) {A, B, C}
Instituto Tecnológico de Aeronáutica (ITA) {A, B, D}
Pça Mal. Eduardo Gomes nº 50, Vila das Acácias, São José dos Campos, SP 12.228-904 – Brazil

Abstract - This paper describes the proposed architecture for big data that is experimentally adopted by the Instituto de Pesquisas e Ensaios em Voo ("Flight Test Research Institute" - IPEV). Current technology simplifies the creation of large amounts of information. As an example, a typical flight test data set encompasses streaming data from instrumentation sensors, weather-related measurements, video images from multiple cameras and test personnel voice communications. In this application Apache Hadoop was selected as the ecosystem for the big data implementation. The Apache Hadoop open source distribution includes several tools that simplify system implementation. The proposed architecture and its technology trade-offs are discussed.

Keywords: Flight Test, Big Data, Hadoop, Cloud computing.

I. Introduction

The Big Data concept is built around the 3V concept (i.e. increasing Volume, Velocity, and Variety) [1]. This concept can change the way information is treated in a cost-effective manner, providing better insight into the decision-making process. Nowadays Big Data is part of every sector and function of the global economy [2]. The definition of the Big Data concept changes by industry sector; it depends on what kinds of software tools are available and what sizes of data sets are used in a particular industry. Big data refers to the dramatic increase in the amount and rate of data that is created and available for analysis. The main reason for this trend is the ever-increasing digitalization of information. Moreover, current technology simplifies the creation of large amounts of information based on the behavior of a company or individual. As a general trend, the number and types of acquisition devices and other data generation mechanisms are growing all the time.

In flight tests, big data sources encompass streaming data from instrumentation sensors, weather-related measurements, video images from multiple cameras, test personnel voice communications, air safety reports, calibration and uncertainty data, and simulation estimations. Big data sets from these sources can contain gigabytes or terabytes of data, and may grow by several megabytes or gigabytes per day. Moreover, flight test ranges normally have a customized Enterprise Resources Planning (ERP) system to support their business environment. Thus, different from structured databases, where all the data are inserted in an organized form across columns and rows, Big Data tools also work with semi-structured (e.g. eXtensible Markup Language - XML) and unstructured (e.g. data files, videos, documents and texts) data.

Big data provides a new paradigm for data analysis, providing a better solution for the decision-making process; however, it also presents a number of challenges to be overcome, such as:
• Big data sets may not fit into the available memory space;
• Processing may take too long;
• The stream could be too fast for storage; and
• Standard algorithms are usually not designed to process big data sets in a reasonable amount of time or memory.
So, there is no single approach to big data. In this context, IPEV experimentally defined an architecture for Big Data adoption to analyze the large volume of flight test data collected from different sources. The proposed design uses the Apache Hadoop environment tools based on the Cloudera distribution. The Apache Hadoop software library is a framework that allows distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each one providing local computation and storage. Rather than relying on hardware to deliver high availability, the library itself is designed to detect and handle failures at the application layer, so it delivers a highly available service on top of a cluster of computers, independently of local failures.

The architecture operates as follows:
1. All data sources are stored in the Hadoop Distributed File System (HDFS);
2. The structured IPEV data source is represented by a MySQL relational database. As Hadoop only works with unstructured data, the challenge was to implement real-time automatic data replication from the relational architecture (i.e. the MySQL database) to the unstructured storage, represented by HDFS; and
3. Charts are generated automatically to allow the flight test engineer to handle most flight test data in a single environment.

II. Background

This section defines the main concepts used in this paper.

A. Big Data

Big Data are large pools of data that can be captured, merged, stored, and analysed. Data has been growing at exponential rates, faster than the evolution of read/write cycle rates for storage disks. With legacy technologies, when all the data stored on a one-terabyte disk must be moved, the resulting process efficiency is very poor [3].

B. Hadoop

Hadoop is an open source software framework which allows distributed processing of large amounts of data using cluster computing. It was developed and is currently maintained under the Apache Hadoop Project [4]. Other Hadoop-related projects have been successfully developed. Examples of well-known Hadoop-related projects are:
1. Hadoop Distributed File System (HDFS);
2. MapReduce; and
3. Hive

Figure 1 depicts the Apache Hadoop ecosystem.

Figure 1 – Apache Hadoop Ecosystem

C. Hadoop Distributed File System

HDFS is a distributed file system for storage and streaming access of large data sets on a machine cluster. Files are divided into blocks (64 MB by default) and distributed across the cluster [5]. There are two types of nodes in HDFS: the namenode and the datanode. The namenode handles the file system namespace and the file system tree, with metadata about the files and their directories. It also keeps track of the locations of the blocks of the files on the cluster. If the namenode is lost, the files on the system would be lost, because there would be no way to locate their blocks. The datanodes are responsible for storing and retrieving the blocks of files. Periodically they also inform the namenode of the blocks they store. In HDFS, the blocks of a file can be replicated (by default 3 times) on other nodes. This scheme provides a fault-tolerant solution, as file blocks can be retrieved from other replicas.

D. Hadoop MapReduce

MapReduce is a distributed data processing model and execution environment that runs on a machine cluster. The processing is divided into two phases: the map phase and the reduce phase.
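Before the two phases are detailed, the canonical word-count example gives a feel for the model. The sketch below runs both phases in a single Python process purely for illustration; the Hadoop framework itself distributes these functions across the cluster (typically as Java classes or Hadoop Streaming scripts):

# Minimal single-process sketch of the MapReduce model (word count).
from itertools import groupby
from operator import itemgetter

def map_fn(offset, line):
    # map: one input record -> intermediate (key, value) pairs
    for word in line.split():
        yield (word, 1)

def reduce_fn(word, counts):
    # reduce: all intermediate values for one key -> output (key, value)
    yield (word, sum(counts))

lines = ["big data in flight test", "flight test data"]
intermediate = sorted(kv for i, line in enumerate(lines) for kv in map_fn(i, line))
# the framework's shuffle step: group intermediate pairs by key
output = [pair
          for key, group in groupby(intermediate, key=itemgetter(0))
          for pair in reduce_fn(key, (v for _, v in group))]
print(output)  # [('big', 1), ('data', 2), ('flight', 2), ('in', 1), ('test', 2)]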
Each phase has its own key-value pairs. The purpose of the map phase is to organize the data in preparation for the processing done in the reduce phase. The input to the map function is in the form of key-value pairs, even though the input to a MapReduce program is a file or files. By default, the value is a data record and the key is generally the offset of the data record from the beginning of the data file. The output consists of a collection of key-value pairs which are the input for the reduce function. The content of the key-value pairs depends on the specific implementation. Each reduce function processes the intermediate values for a particular key generated by the map function and generates the output; essentially, each key is handled by a single reducer. The number of reducers is decided by the developer; by default it is 1.

The input data is loaded into map tasks to be processed in parallel, and their output is the input to the reduce tasks. With such a scheme, data is processed in a distributed form on several cluster nodes. The cluster has a jobtracker and many tasktrackers. Every job given to the Hadoop cluster is scheduled by the jobtracker node on the tasktracker nodes, in an operation similar to a master-slave relationship. The programmer needs to specify the map and reduce functions [6].

E. HIVE

Hive is a distributed data warehouse for Hadoop. It was built at Facebook based on the requirement for a Structured Query Language (SQL)-like language to work with big data on Hadoop. SQL applications are commonly used in industry, so the use of a similar language provides a user-friendly environment for programmers and flattens the learning curve compared with writing complex MapReduce programs. Hive provides its own query language, known as HiveQL, which is also similar to SQL and helps to manage data present in HDFS [7].

F. Sqoop

Typically, the valuable data of an organization is stored in relational database systems (RDBMS). In this case, the complexity of retrieving information could jeopardize the efficiency of the decision-making process. Sqoop provides an efficient data exchange mechanism between the RDBMS and Hadoop. This tool uses MapReduce applications to provide the desired information to the other databases.

G. Cloudera

It was also necessary to select the Hadoop distribution kit that supports the required tools and technologies presented within the initial architecture. For this, the free distributions of IBM BigInsights [8], Cloudera [9], and Pentaho Big Data [10] were evaluated. Cloudera was selected, mostly due to its well-documented site and the possibility of downloading an already pre-configured virtual machine with the Hadoop environment and its tools. This feature allowed the execution of several tests in a local environment.

III. The Proposed Architecture

A. Information Lifecycle Management

Information Lifecycle Management (ILM) is a lowest-cost process used for managing information through its lifecycle, from conception until disposal, in a manner that optimizes storage and access. ILM is not just hardware or software; it includes processes and policies to manage the information. It is designed upon the recognition that different types of information can have different importance at different points in their lifecycle. Predicting storage requirements and the associated costs can be very challenging as the business grows, and storage devices vary in technology and cost.
As an example, let's imagine storage as a pyramid. At the top of the pyramid there is high-performance, expensive storage (e.g. fast disks); in the middle, medium-performance storage (e.g. optical disks); and at the bottom, low-performance, low-cost but high-capacity storage systems (e.g. tape). Figure 2 illustrates the proposed storage environment:
• Disk storage devices are random access media. Such devices are best suited for storing frequently accessed data and allow multiple parallel data read/write cycles.
• Tape is an economical, high-capacity, sequential access medium, mostly used for disaster recovery purposes. It is best suited for large files, so the data streaming capabilities of tape drive technology can be properly exploited.

Figure 2 - Online, near-line, and off-line storage

In general, the importance of stored information has been increasing over the years. However, each data set has its own relevance for the business when created, and this relevance changes over the years. Such change is known as the data lifecycle; Figure 3 presents a typical one. Online storage is best suited to store frequently used or fresh data; for IPEV, the online file usage period should not exceed 18 months. Near-line storage is adequate to store not-so-fresh and/or not-so-widely-used data; for IPEV, the near-line file usage period should be between 18 and 36 months. Over 36 months, data should be available only in offline storage.

Note: the specific environment of experimental flight test, where acquired data is used for aircraft development and certification, requires the implementation of a very reliable offline storage system, from which flight test data can be successfully retrieved to assist accident investigation.

Figure 3 - Data value changes over time [12]

In some special conditions (e.g. an aeronautical accident investigation), data which is not frequently accessed or considered inactive can suddenly become valuable again. Historically, the requirement to retain information has resulted in a "buy more storage" mentality. However, this approach has only served to increase overall storage management costs and complexity, and has increased the demand for hard-to-find qualified personnel.

B. Relational Database Management System

In this application MySQL is used as the relational database management system (RDBMS). MySQL is deployed in 9 of the top 10 busiest sites on the web, including Facebook, Twitter, eBay and YouTube, as well as some of the fastest-growing sites such as Tumblr, Pinterest and box.com [13]. There are multiple architectures that can be used to achieve highly available database services, each differentiated by its uptime availability level. These architectures can be grouped into three main categories:
1. Data Replication;
2. Clustered & Virtualized Systems; and
3. Shared-Nothing, Geographically-Replicated Clusters.

For this application the second category was selected. Figure 4 shows the recommended topology to support the current IPEV workload.

Figure 4 - Topology for Medium Web Reference Architecture

This topology distributes the core functions of Session Management, Web Applications, Content Management and Analytics across their own server and storage infrastructures, enabling individual deployment, management and scaling.
This approach provides simpler evolution and manageability of the architecture as the workload evolves and grows beyond the initial design and business expectations. To size the required number of running MySQL instances as a function of the number of concurrent application servers, a good rule of thumb is that each MySQL server can support up to 8 application servers. In a read-intensive environment, adding more MySQL slaves is the way to sustain application server scalability. PHP programs require more application servers compared with Java and C# software. Independently replicated MySQL master/slave servers for each of the core functions provide more flexibility and control over the MySQL infrastructure for developers.

The Enterprise Applications workload uses the default InnoDB storage engine to provide transactional support and crash recovery. There are two ways of delivering high availability:
1. Use of Linux Heartbeat with semi-synchronous MySQL replication; or
2. Use of OS-based solutions like Distributed Replicated Block Device (DRBD), along with MySQL Enterprise Backup.

Linux Heartbeat [17] implements a heartbeat protocol that sends messages at regular intervals between two or more nodes. If an acknowledgment message is not received from a node within a given interval, that node is assumed to have failed and the cluster resource manager initiates a failover action. In the event of a failure, the resources of the failed host are disabled and the resources of the replacement host are enabled; furthermore, the Virtual IP (VIP) address of the cluster is redirected to the new host. DRBD [18] is an open source Linux kernel block device which leverages synchronous replication to mirror data between two systems working in an active/passive configuration.

For data mining and business intelligence applications, session and web data are captured in an analytics database that executes off-line report generation. As a real-time database designed for 99.999% availability, MySQL Cluster could also replace the existing heartbeating mechanisms and the Memcached layer of the RDBMS.

Scalability of the content management application is critical, as this is a core part of the web service. MySQL replication is used to deliver read scalability, with each MySQL master typically attached to 20-30 slaves. In a regular content management workload, each slave should be able to support up to 3,000 concurrent applications. Such performance exceeds the requirement of a typical test range such as IPEV's.

Distributed file systems such as MogileFS are often used to index the physical assets of the content management system. Metadata for each asset are stored within InnoDB tables managed by MySQL. The content assets (e.g. images, videos and documents) are not physically stored in the database, but rather within the file system. For physical storage, the content assets can be stored either on a Storage Area Network (SAN) or distributed across local storage devices attached to each server. As a SAN can be a potential Single Point of Failure (SPOF), it is recommended to use mid-range or high-end products for this device, because such equipment classes usually include mechanisms to deliver high availability. It should be highlighted that the use of Network Attached Storage (NAS) or Network File Systems (NFS) is not recommended. If the assets are distributed across local storage devices, it is important to ensure that appropriate mechanisms for high availability of the indexing and metadata are implemented. A solution commonly deployed to address such requirements is Linux Heartbeat with semi-synchronous MySQL replication, or OS-based solutions like DRBD.
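As an illustration of the master/slave pattern described in this section, application-side read/write splitting can be sketched as follows (hosts, credentials and schema are placeholders for illustration, not IPEV's actual deployment):

# Sketch: all writes go to the master; reads are spread over replicated slaves.
import random
import mysql.connector

master = mysql.connector.connect(
    host="db-master", user="app", password="secret", database="ftd")
slaves = [mysql.connector.connect(
              host=h, user="app", password="secret", database="ftd")
          for h in ("db-slave1", "db-slave2")]

def write(sql, params=()):
    cur = master.cursor()
    cur.execute(sql, params)   # writes hit the master only
    master.commit()

def read(sql, params=()):
    cur = random.choice(slaves).cursor()
    cur.execute(sql, params)   # reads scale with the number of slaves
    return cur.fetchall()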
C. NoSQL

MySQL shall not be used for everything. Apache Cassandra is an open source system designed to manage large volumes of data in real time, enabling immediate response and providing a fail-safe solution. The Cassandra system acts as a distributed, non-relational (NoSQL, "not only SQL") database. Cassandra [14] is used for high-rate writes and low-rate reads. Its main advantages are:
1. Cassandra can run on cheaper hardware than MySQL;
2. Simple expandability; and
3. Its schema-less design.

Furthermore, the SQL-based Vertica application is used for analytics and for large aggregations and joins. In this case, neither Cassandra nor Vertica requires writing MapReduce jobs. In this particular application, all flight test data, such as images, videos, test reports, and post-mission results, are stored in Cassandra.

D. Integration of RDBMS, NoSQL and Hadoop

The challenge was to explore the possibility of automatic, real-time data replication from the relational architecture, represented by MySQL, to the unstructured storage, represented by HDFS. This challenge was addressed with a tool called MySQL Applier for Hadoop (Happlier), which injects data from MySQL into Hadoop in real time [15]. The tool reads the MySQL binary log through the Binary Log Application Program Interface (API) and creates corresponding entries for databases and tables as Comma Separated Values (CSV) files in HDFS. This process is completely transparent to the user application.

Considering NoSQL, Cassandra supports the execution of Hadoop MapReduce tasks. MapReduce tasks can search data inside Cassandra and return data to Cassandra or to the file system. MapReduce and the other tools run with the non-distributed trial version of Cassandra; however, to run in a production environment it is required to install Hadoop on the Cassandra cluster. Cassandra's Hadoop support implements the same interface as HDFS to achieve input data locality.

E. The Architecture

The proposed IPEV architecture is presented in Figure 5. It provides the software development structure currently adopted by IPEV and also shows how structured and unstructured data are stored and analyzed. The software development cycle is divided into two parts:
• Development environment; and
• Operation environment.

In the development environment, where new applications are developed or existing software is updated, the applications are controlled by a version control system (e.g. SVN or Github [11]). In this environment we use duplicate instances of the database (i.e. the RDBMS). The applications are submitted and evaluated with the following test procedures:
• Unit;
• Acceptance;
• Load; and
• Stress.

After verification and validation, the applications are distributed to the operational environment, called Enterprise Applications. These applications access the relational database structure (Figure 4). All enterprise applications store data in MySQL. Transactions are first maintained in the relational database (e.g. MySQL) and then instantly replicated to HDFS in CSV format via the MySQL Hadoop Applier (i.e. Happlier). For further analysis, Hive is used: the CSV files are remapped as databases and tables equivalent to the relational model, which allows data analysis using Hadoop.
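The remapping step just described can be pictured as creating a Hive external table over the CSV directory written by Happlier. A minimal sketch follows (the path, table and column names, and the PyHive client are assumptions for illustration):

# Sketch: mapping Happlier's CSV output in HDFS as a queryable Hive table.
from pyhive import hive

cur = hive.connect(host="hadoop-master").cursor()
cur.execute("""
    CREATE EXTERNAL TABLE IF NOT EXISTS flight_test_points (
        test_id INT, param_name STRING, ts STRING, value DOUBLE)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    LOCATION '/user/hive/warehouse/ftd/flight_test_points'""")

# The table can now be queried with HiveQL, which compiles to MapReduce jobs:
cur.execute(
    "SELECT param_name, COUNT(*) FROM flight_test_points GROUP BY param_name")
print(cur.fetchall())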
Figure 5 - IPEV Big Data Architecture

The Enterprise Applications are developed in Java with the Spring Framework or in Microsoft ASP.NET with C#. In the case of Java, Maven is used to manage the application lifecycle and to control its dependencies. All applications use a persistence framework for data storage (e.g. Hibernate or Entity Framework). The Model View Controller (MVC) architecture and RESTful APIs are also used to allow an easy integration process among all applications.

During the flight test campaigns, many applications are developed (e.g. Matlab script applications), reports are produced and several images and videos are generated. Such information and related files are unstructured and thus stored in Cassandra, in the NoSQL environment. After that, the data is sent to Hadoop. With all the information in the big data environment, the next step is data analysis. For this, the application also allows the execution of user-transparent data analysis in the big data environment. Furthermore, the automatic generation of data charts helps the flight test technical staff to produce their final test reports. This goal was achieved by integrating the existing architecture with the R system [16] through operating system calls. In that way, our application sends a message with the required parameters to R; R then generates the required results and forwards them back to our application, which is responsible for real-time rendering.

For statistical analysis, the R tool is a good option; however, it works as a standalone tool. The intention was to embed the analysis with R scripts within the web application (Java/C#). To make this possible, the web application executes R scripts that access data via Impala, and then presents graphical data and outlines to the technical staff. Impala enables access to data in HDFS with better performance than Hive (Figure 6). The improvement in performance occurs because Impala does not require the execution of a MapReduce job, as Hive queries do, since the R script applications handle the data themselves.

Figure 6 - Performance Analysis: Impala vs. HIVE [9]

Other available options for data analysis include Artificial Neural Networks and Data Mining technologies.

IV. Conclusion

This paper proposes an architecture for big data adoption in the flight test business. It also describes the various resources that should be used in the project. The proposed solution provides a robust and scalable architecture to be used by IPEV for software development and information storage. Moreover, it is based on the use of open source tools that have already proved to be an efficient solution for large-scale systems. The main benefit of the architecture is the gain of competitive advantage: it allows the combination of different data types for the execution of a more comprehensive flight test data analysis, and such a system can be operated and managed at scale.

The proposed architecture deals with different technologies. Future works should:
1. Verify the integration of the Pig and Jaql technologies into the scope of the project;
2. Explore the use of specific tools that automate testing for the Big Data environment (e.g. MRUnit for MapReduce tests); and
3. Explore and verify new tools that could improve application safety.
Acknowledgement

We wish to thank the unconditional support given by the Instituto de Pesquisas e Ensaios em Voo (IPEV) and the Instituto Tecnológico de Aeronáutica (ITA). We also thank FINEP, which under agreement 01.12.0518.00 funded the development of this proposed architecture and the presentation trip.

References

[1] Gartner. Press release, Stamford, Conn., June 27, 2011. Available at: http://www.gartner.com/newsroom/id/1731916. Last access: 20/04/2015.
[2] Shayib, Mohammed A. Applied Statistics, 1st Edition. 2013.
[3] White, T. Hadoop – The Definitive Guide. Third edition. O'Reilly Media, Inc., 2012.
[4] Apache Hadoop. Welcome to Apache™ Hadoop®. The Apache Software Foundation. 2014.
[5] Borthakur, Dhruba. Hadoop Distributed File System Architecture Guide. The Apache Software Foundation. 2008.
[6] Dean, Jeffrey; Ghemawat, Sanjay. MapReduce: Simplified Data Processing on Large Clusters. Google Inc. OSDI. 2004.
[7] Capriolo, E.; Wampler, D.; Rutherglen, J. Programming Hive. Sebastopol: O'Reilly Media, 2012. 328p.
[8] IBM. Why IBM for Hadoop? Available at: http://www-01.ibm.com/software/data/infosphere/biginsights/. Last access: 20/04/2015.
[9] Cloudera. Available at: cloudera.com/content/cloudera/en/home.html. Last access: 20/04/2015.
[10] Pentaho. Available at: pentaho.com/product/big-data-analytics. Last access: 20/04/2015.
[11] Github. URL: https://github.com/. Last access: 20/04/2015.
[12] ESG. Enterprise Strategy Group. Available at: http://www.esg-global.com/research-reports/2015-it-spending-intentions-survey/. Last access: 20/04/2015.
[13] MySQL. MySQL Reference Architectures for Massively Scalable Web Infrastructure. Available at: https://www.mysql.com/why-mysql/white-papers/mysql-reference-architectures-for-scalable-web-infrastructure/. Last access: 20/04/2015.
[14] Hewitt, Eben. Cassandra: The Definitive Guide. O'Reilly Media, November 2010.
[15] MySQL Applier for Hadoop. Available at: http://dev.mysql.com/tech-resources/articles/mysql-hadoop-applier.html. Last access: 20/04/2015.
[16] Paradis, Emmanuel. R for Beginners. Institut des Sciences de l'Évolution, Université Montpellier II. 2005.
[17] Robertson, Alan. The Evolution of The Linux-HA Project. Linux Technology Center, International Business Machines Corporation. 2006.
[18] Ellenberg, Lars. DRBD® 9 & Device-Mapper: Linux® Block Level Storage Replication. Proceedings of Linux-Kongress 2008, October 7-10, 2008, Hamburg, Germany.


ETTC 2015 – European Test & Telemetry Conference

How big data brings added value and agility during the flight test campaign

A. Laurent PELTIERS, B. Jean-Marc PRANGERE
AIRBUS Operations SAS, 316 route de Bayonne - Toulouse

Abstract: Testing aircraft and airborne systems in flight requires huge amounts of data stored in several databases: data, video, configuration. Big Data technology allows us to revisit the way we store, process and use the data to increase efficiency in development: shorter lead time whilst increasing maturity.

Keywords: Big Data / Flight Test / Systems testing

1. Introduction

In the frame of the incremental development of Airbus aircraft (like the New Engine Option), the flight test campaign is reduced to 8 months to ensure quick entry into service whilst aiming at maturity and keeping a high industrial ramp-up. Big Data technology (storage, data access and tools), put in place at the Airbus Flight Test Centre since April 2015 for the Single Aisle NEO, offers the testers a new way of working. The breakthrough comes from immediate access to the data of a whole flight test campaign: this brings new capabilities for data retrieval and comparison, early detection of anomalies including recognition of patterns, test analysis by learning capabilities, correlation between parameters, and prognosis.

1st part: The hardware and software solution

2. Current way of working

2.1 Challenging context

From the A320 in the 80's, AIRBUS aircraft became more and more complex and, during the same period, the flight test installation (FTI) became more and more accurate. Thus, while in 1987 we recorded 12,000 parameters in a PCM format with implicit dating, during the latest A350 campaign we recorded up to 650,000 parameters in IENA format with explicit dating. These facts lead to a huge increase in the data collected during a flight test campaign. On the other hand, the incremental development approach leads the test analysts to compare data coming from different test campaigns and therefore to request 10- to 20-year-old records.

2.2 Sequential approach

During the same time period, the way of working did not really change. The main evolution comes from the recorders' technology used on board: previously the recorders were based on tapes (from the AMPEX 14-track recorder to the SONY AIT); now the recorders use removable hard disks (from SCSI hard disks to SATA solid-state memory disks). Nevertheless, the post-processing did not change and is still sequential: the data are processed test after test. The different steps of the post-processing are:
• Copy the raw files from the removable hard disk to a centralized storage facility
• Make a secured copy of the original data in a tape storage library, because the on-line storage capacity can only keep 3 months of data on-line
• Convert to engineering units. Nowadays, this step is done on user request for a set of parameters over a specific time period. This task can take a long time if the original data are no longer on-line.
• Analyse the extracted data with different applications.
The typical post-processing applications are:
• Complex event detection
• Processing of computed parameters which cannot be done in real time
• Rendering in "Excel format" or in graphical form
• Storing the test results in a database for future use

Figure: Dataflow for the A350 campaign (6,000 sensors plus avionics buses, video and logs; up to 2.5 TB per day and 30,000 batches per month; ~200 TB of online storage kept at least 3 months, ~850 TB of storage overall, archiving up to 20 years; ~300 internal and ~300 external users)

The IT infrastructure to process the data has been enhanced to follow the evolution of the volume collected, but no real breakthrough has been made. The processing is still done on LINUX servers linked to a NAS file system, and the offline data are managed by LTO tape libraries. Bottlenecks have then appeared year after year:
• The volume of offline data is continuously increasing, so users can wait a long time before getting their data
• The processing test by test is not efficient for comparing different flights or different versions of the same aircraft, and will become a real difficulty for incremental development
• The flight campaigns are ever shorter and involve more and more aircraft, so the number of tests received each day is increasing
• On average, 10% of the recorded signals are processed for each flight. To enhance aircraft maturity at entry into service, each flight hour must be used.

Figure: Usage of the data vs. the age of the test

Considering our current limitations and the expected data volumes in the coming years, the flight and integration centre decided to launch a big data project to process the data of the NEO campaign.

3. What is "Big Data"?

This technology comes from the main actors of the internet like Yahoo, Google and Facebook. To provide ever more efficient services, they implemented solutions which are the basement of what is commonly called "big data". Big data is based on 3 pillars:
• Volume (terabytes, petabytes...)
• Velocity (up to real time)
• Variety (structured and unstructured multiple sources)

The way the data is managed is also different. It is designed around 4 main steps:
1. Collect the maximum of data, even if they are not structured (typically free text)
2. Clean up the data and organize them in an efficient storage system (could be a database or a distributed file system)
3. Analyze these data with a large set of algorithms, especially statistical algorithms (clustering, large-dimension regression, predictive analysis...)
4. Decide, using Business Intelligence (BI) tools to drill into the huge volume of data collected and produced.

4. The "Big Data" project in the integration and test centre

4.1 The project organization

This project has been led closely with the IT teams; both functional architects and IT specialists were involved. This organization was the key to success in such a short time (1 year). The business gave the use cases and was in charge of the software; the IT was in charge of deploying the solution (hardware and middleware) within the AIRBUS policy and of putting the run mode in place. The first step of the project was a call for tenders with 4 main actors of the market.
To evaluate the maturity of the solutions, we provided a large dataset to each tenderer, and they gave us back a mock-up of a possible architecture. During this phase we concluded that the technology is mature and can bring us real help in reaching our objectives in terms of performance and of perspectives for new processings. The studies also demonstrated that integration into the current environment is quite easy. After the final tender selection, the project was led in a classical way, but with a strong investment of the teams to deal with the standard AIRBUS rules. The most challenging part concerned the infrastructure selected: it is an engineered system which includes all the hardware parts and the software. All along the project, the selected vendor provided us with support and expertise on these new technologies.

4.2 The technical choices

Engineered system: a cabinet containing all the parts: servers, an internal InfiniBand switch and a gateway switch to connect to a standard Ethernet network. All the elements are pre-installed and configured.

The software of the solution:
• FLUME to inject the data as streams
• AVRO to serialize our data as objects
• A noSQL database: a key/value database which contains all the raw data (AVRO objects)
• A HADOOP cluster: a set of servers with a middleware to manage distributed processing. We use SPARK for our processing.

Apart from the noSQL database, all the elements selected are open-source products managed by the APACHE foundation.

4.3 The main issues

Integration in the IT organization: a system packaged with all the hardware and software integrated is difficult to operate in an organization structured around specialities like network, servers or operating systems.

Time series: our data are mainly huge time series. These objects are not natively handled by big data solutions; we needed to design an in-house solution to manage our data efficiently.

Bursts: "big data" solutions are usually fed with a continuous stream of data. In our case, the data are received twice a day, at the return of the flights.

4.4 The next steps

The solution is in place and all the flight data of the NEO campaign are available in it. We have connected the legacy tools to the new solution; now it is time for more advanced usage. We are currently studying two things:
• Machine learning algorithms (supervised or unsupervised clustering, large-scale regression, predictive analysis…). Our study is based on the R language; many tools and libraries are available in the open-source world. (Figure: screenshot from R studio)
• BI tools, to get an easy way to drill into and display the big amount of data. The goal of such a tool is to have an open window on the big data. (Figure: screenshot of a Business Intelligence tool)
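To give a flavour of the clustering idea in the first bullet above, here is an equivalent sketch in Python with scikit-learn (the study itself is R-based; the feature values below are invented purely for illustration):

# Sketch: grouping flight test points with unsupervised clustering.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# One row per test point, e.g. [altitude_ft, mach, bank_angle_deg]
X = np.array([[35000, 0.82,  2.0],
              [35500, 0.83,  1.5],
              [12000, 0.45, 30.0],
              [11500, 0.44, 28.0]])

labels = KMeans(n_clusters=2, n_init=10).fit_predict(
    StandardScaler().fit_transform(X))
print(labels)  # similar flight conditions fall in the same cluster;
               # points far from any cluster hint at unusual behaviour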
2nd part: Application according to user needs

5. Data retrieval

5.1 "As is"

Searching for test conditions already flown is a frequent task, both to compare flights and to avoid re-flying when a similar test has already been done. Today, the search can be made in the flight test database for a specific test request or purpose, but there is no link to the numerical data; as a result, local databases exist to store relevant test cases together with their numerical data for later retrieval, analysis and certification card production. These databases are static, defined in advance and populated mainly with specific flights. Hence the search for a given flight condition over a test campaign can take time, and once the potential test cases matching the conditions have been found, a complementary analysis is needed (generally by plotting parameters).

5.2 "To be"

Big data access removes this limitation and offers a way to express a search for a given test condition directly and meaningfully, such as: find all test cases flown above FL300, above Mach 0.8, in a turn (bank angle above 5°), with autopilot engaged, and plot the vertical acceleration. It will also bridge several databases (aircraft configuration, flight test data, test results) to widen the criteria and be more selective. Comparison is further eased by adding the comparison criterion downstream of the search criteria, for example: compare the bank angle in automatic precision approach below 200 ft between one aircraft version and another. Once the tests have been found, the real flight test data can be replayed in desktop simulation to validate a design modification very early, before its implementation in avionics software, and thus reduce development time (this is useful for new functions coming from the latest aircraft programme and applied to older programmes). It can save both lab and flight tests. This simple case highlights how we can make a wider (exhaustive), simpler and faster use of our data, and save flight hours. A sketch of such a search is given below.
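As an illustration of what such a request could look like once the data are exposed to an analysis tool, here is a hedged sketch with pandas; the file and column names are invented, not the actual flight test parameter names.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical table of resampled test points for a whole campaign.
df = pd.read_parquet("test_points.parquet")

# "Above FL300, above M0.8, in turn (bank > 5 deg), autopilot engaged"
hits = df[(df.flight_level > 300)
          & (df.mach > 0.8)
          & (df.bank_angle.abs() > 5.0)
          & df.autopilot_engaged]

# Plot the vertical acceleration of every matching test case.
for flight, grp in hits.groupby("flight"):
    plt.plot(grp.time, grp.nz, label=flight)   # nz: vertical load factor
plt.xlabel("time (s)")
plt.ylabel("vertical acceleration (g)")
plt.legend()
plt.show()
```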
6. Early and continuous detection of "anomalies"

A usual flight test campaign lasts between 2 000 and 3 000 hours, whereas an aircraft in service will fly 6 000 hours per year. To reach maturity, there is a strong need to detect, during the flight test campaign, all the misbehaviours that can happen, and to take benefit of every flight.

6.1 "As is"

Today, event detection algorithms (called surveillance) are defined in advance and run on every flight. If the system design is modified during flight test, it would be useful to add new algorithms or thresholds and to re-run the surveillance, which is very difficult on past flights (the data are not online).

6.2 "To be"

Immediate access to the data allows testers and designers to implement new surveillance during the flight test campaign and to re-run it on existing data. It shortens the validation cycle of a modification and saves data reduction time. Surveillance is classically defined by explicit algorithms, but big data can bring additional value through statistical analysis such as pattern recognition. Indeed, the type of abnormality cannot always be specified in advance, as it may be very varied (noise on sensors, fluctuations of flight dynamics parameters such as oscillations, jerks on load factors...). We can use big data techniques (machine learning and statistical tools) to analyse the signals and detect unusual patterns, pointing out cases where a system specification or a control law tuning might be needed.

7. Flight test analysis

Big data and the associated tools provide a way to enhance test analysis.

7.1 Use of learning capability and clustering

Big data will also help the tester troubleshoot the events encountered in flight by providing a global view of the parameters whose value or status changed in a given time window around the event. It will ease the analysis where parameters are linked, but not directly through a mathematical formula. Events generated by the surveillance can be correlated in order to separate cases that are as per design from those that are unexpected. Big data could also help testers in flight test analysis by identifying the parameters that are beyond the usual acceptable envelope: we feed the system with a subset of parameters relevant to the autopilot system performance, and algorithms are developed to sort tests into clusters and learn, over several cases, which criteria have to be met. Certification tests can also be clustered to build a reference grid that will support in-service or acceptance events where findings are reported: it will tell us whether the reported behaviour is within or beyond the certification envelope, and save analysis time.

7.2 Statistical analysis

The analysis of recurring events can be eased by correlation analysis in order to point out the main contributors. For example, we encountered a longitudinal oscillation in cruise on one aircraft and not on the others fitted with the same computer software; root cause analysis highlighted that the loading case was slightly different in terms of fuel repartition in the tanks. Correlation between parameters does not mean that the root cause is identified, but we may be able to detect the contributing factors: inertia, speed, flight level, etc.

7.3 Prognosis

From the former applications of statistical analysis on existing data, we could also detect and predict sensor failures (drift on parameters, noise...) and launch investigation or replacement prior to the test.

8. Conclusion

Big data technology is a disruptive innovation for improving test efficiency through immediate access to the data of a whole flight test campaign. It allows data to be searched, filtered and compared for given flight conditions, and thus prevents re-doing a test that has already been done. Moreover, the recorded data can be used not only to detect issues encountered in flight, but also to check in simulation that a fix is effective prior to the release of the next computer standard, thus increasing maturity and reducing lead time. The data analysis capacity is also increased by learning from data, clustering and statistical approaches. Valuable information is stored in the data; it is up to the testing teams to find the best methods and tools to bring out the added value.

9. Acknowledgement

The authors acknowledge the contribution of their colleagues to this work, the big data project team: Patrick Obin, Faten Korken, Pascal Claux, Vincent Gallinier, Nathalie Prosper, Adil Soubki, Walter Henon, Severine Poussou, Amine Taourchi, and our subcontractors Benoît Garrigues and Vincent Herrero.


Big Analog Data - Extracting Business Value from Test & Telemetry Data - Otmar Foehner, Robert Lee and Olivier Daurelles - National Instruments - United States, United Kingdom, France. Managing and extracting value from test and telemetry data that is collected with different tools and formats, stored in geographically dispersed locations, and managed by different groups within a company, or even by multiple companies and organizations, is a big challenge for every large engineering effort. The value of this data can only be harnessed by employing the proper tools and techniques to turn the data into information, and information into knowledge. Traditional approaches to the big data problem used in business, marketing and social media applications are not suitable for solving today's and tomorrow's big engineering challenges. This presentation will explore the unique challenges of, and potential approaches to, "Big Analog Data" for engineering applications, and demonstrate why solutions must consider and complement existing company big data efforts.


Improving Test Cell Efficiency by Monitoring Methods

Aurélie Gouby, Alexandre Vandecasteele
Snecma, Rond-Point René Ravaud - Réau, 77550 Moissy-Cramayel CEDEX - France
aurelie.gouby@snecma.fr, alexandre.vandecasteele@snecma.fr

Abstract

Aircraft engines are systematically tested throughout their development phases to evaluate their performance. During a test, hundreds of measurements on the engine and on the test cell itself are acquired and then stored in a dedicated database. These data are then used by the design office to validate control models, to evaluate the engine's performance, and so on. Several factors can cause an abnormal measurement in the database: a sensor failure in the test cell, a bad sensor connection, a data transmission problem in the test cell, or an incorrect load into the database itself. These problems lead to non-quality in the data, and to subsequent delays and additional costs that everyone wants to avoid in the development phase of an engine. Preventing measurement errors is thus a priority during an engine test. In this article, we present a measurement validation algorithm used in "nearly real time" mode during the test. The algorithm models the measurement, either by building a mathematical model or by taking a known physical model. An anomaly Z-score is then computed from the model's residuals and compared to an abnormality threshold, learned on real data or given by an expert. The user is alerted about the measurement value if the Z-score exceeds this threshold. With this monitoring system, if a measurement problem occurs, the engine test owner can react quickly, rerun the acquisition or change the sensor if it is faulty.

Introduction

Test cells are used for the acceptance of each new production engine and for the evaluation of new engine parts during development phases (Figure 1). This test procedure is very expensive because of the engine instrumentation, and the bill can grow sharply if a faulty sensor prevents the nominal progress of the test. The goal is to reduce the risk of unexpected events, alerting as early as possible by ensuring the measurements are correctly acquired.

Fig. 1: LEAP-1B installed in a test cell

During the test, the engine and the test cell are monitored through the acquisition of hundreds of real-time measurements: temperature, pressure or vibration sensors, etc. The test cell system is able to record each sensor at different acquisition frequencies and presents graphs with alert bounds in real time (during engine operation). In the monitoring room, screens enable test engineers to visualize some important engine parameters, such as the rotor speed or various temperatures. However, not all measurements can be checked for validity in real time. During the test, two types of data are stored in a dedicated database:
- Transient data: low-frequency signals (up to 100 Hz), such as performance measurements (pressures, temperatures, flows, gauges...), and high-frequency dynamic measurements, such as tip timing, accelerometers and microphones (up to 200 kHz).
- Steady states: points, or snapshots, acquired when the engine and the conditions are stabilized.
Fig. 2: Deployment of the PHM algorithm in the test cell, with a direct link between the test cell acquisition system, the cell database and the PHM system.

The PHM (Prognostic and Health Monitoring) system has access to the cell database (Figure 2); it is therefore possible to analyze the data once they are stored (the delay between acquisition and storage is a few minutes). The diagnosis is made in near real time.

Context of application

The health-monitoring algorithms are developed by Snecma on the SAMANTA (Snecma Algorithm Maturation ANd Test Application) platform, previously described in [1]. This environment industrializes blocks of mathematical processing tools as graphical units; aeronautical engineers can use each mathematical module to build their own specific solutions. To facilitate monitoring analysis by the test team, a supervisor based on the SAMANTA platform has been set up in the test cell, with a direct link between the database and the platform so that the analysis can be performed as quickly as possible. The algorithm described in this paper analyzes, in a first instance, the steady states stored in the cell database.

Before connecting the algorithm in the test cell, it is important to ensure that the computation results have a minimum of reliability: one does not want to interrupt an expensive, tightly scheduled test for bad reasons. Hence, two performance indicators are defined: the PFA (Probability of False Alarm) and the POD (Probability Of Detection). These indicators are sometimes difficult to compute when no labelled abnormal data are available (which is often the case in a test context). Writing D for the event "an abnormality is detected by the algorithm" and H for the event "the measure is healthy", the PFA is the conditional probability

PFA = P(D | H)   (1)

i.e. the probability that an abnormality is detected although the measure is healthy. This probability should not be confused with the type-I error level used to define the rejection domain of the test. The probability of detection is

POD = P(D | not H)   (2)

usually called the "power" of the test in statistics. Obviously, the ideal monitoring method minimizes the false alarm rate while maximizing the correct detection rate.

Monitoring algorithm

The sensor monitoring algorithm developed for this project is divided into two phases: learning and execution. As explained in the previous section, the monitored measurements are steady states taken during the test; the temporal dimension therefore cannot be used. To overcome this, the monitored measurement is expressed as a function of another, "contextual", measurement: for example the fan speed for a pressure. Obviously, the learning phase is not required if the physical relation linking the two variables is known.

Fig. 3: Algorithmic steps of the learning phase. From the steady states, the measure to monitor and the contextual measure feed the creation of a regression model; the residuals, their mean, the model coefficients and the model quality are stored in the algorithm memory.

The first step consists in modelling the cloud of points by a curve. Assume n observations are available. The measure to monitor, denoted Y, is estimated from an explanatory measure, denoted X, with a link that may be linear, polynomial, exponential or logarithmic, according to the shape of the cloud. The choice of transformation is given by the engine experts, who know the physical relationship between the parameters, or is computed automatically with a penalty method such as the Lasso algorithm described in [2].
Thus we obtain:

Y_hat = f(X)   (3)

where Y_hat is the estimator of the measure to monitor and f the regression function. The regression residuals are given by:

r_i = Y_i - Y_hat_i,  for i = 1, ..., n   (4)

From these residuals, the mean is computed:

r_mean = (1/n) * sum(r_i, i = 1..n)   (5)

Fig. 4: Steady-state scatter plot and the associated regression model (top subplot); regression residuals as a function of fan speed (bottom subplot).

The bottom plot of Figure 4 shows that, in the pressure example, the variance of the residuals depends on the explanatory parameter: the higher the fan speed, the larger the variance of the pressure measurement. This may be due to the sensor precision. It is therefore important to take this into account in the detection algorithm.

Fig. 5: Algorithmic steps of the variance model computation in the learning phase.

Once the regression model is created, the residual variance is estimated on a sliding window of size w (a parameter of the algorithm):

sigma2_k = (1/w) * sum((r_i - r_mean)^2, i = k-w+1..k),  for k = w, ..., n   (6)

After estimating the variance, the purpose is to express it as a function of the fan speed. In the same way as before, one seeks a function g such that:

sigma2 = g(X)   (7)

It is then possible to create a confidence tube around the curve by choosing a threshold number lambda of standard deviations. This tube takes into account the sensor measurement uncertainty and the dispersion of the data at high fan speed. The upper and lower envelopes of the tube are given by:

upper(X) = f(X) + lambda * sigma(X)   (8)
lower(X) = f(X) - lambda * sigma(X)   (9)

Obviously, an expert may also give a tolerance directly in the unit of the measurement.

Fig. 6: Modelling of the residual variance by polynomial regression (top subplot); confidence tube (3 sigma) around the steady states (bottom subplot).

By default, two alarm thresholds are set in the learning phase, one at three standard deviations and one at six standard deviations, because in normal operating conditions (Gaussian residuals) the probability of observing a point outside the 3-sigma tube is below 0.3%, and the probability of observing a point outside the 6-sigma tube is below 2e-9. These thresholds remain configurable by the test engineers and can be adjusted according to the desired false alarm and detection rates.

Fig. 7: Algorithmic steps of the running phase. For each new steady state, a measure estimate and a variance estimate are computed from the contextual value, using the model coefficients and mean of residuals stored in the algorithm memory, and the test |Z-score| > lambda is applied.

At runtime, the new steady states are analyzed one by one. Let (x, y) be the coordinates of a new point. The first step is to apply equation (3) to obtain an estimate of the measure value at this point:

y_hat = f(x)   (10)

One can thus calculate the model residual:

r = y - y_hat   (11)

The next step consists in applying equation (7) to obtain an estimate of the variance for the associated value of the contextual measure:

sigma2 = g(x)   (12)

Once the measure and the variance have been estimated, a Z-score can be calculated using the residual and the mean r_mean computed during the learning phase:

Z = (r - r_mean) / sigma   (13)

The Z-score is then compared to the anomaly threshold lambda used to define the confidence tube in the learning phase. If |Z| exceeds this threshold, the point is outside the tube and is therefore considered anomalous, because it lies outside the normal dispersion of the model. A condensed sketch of both phases is given below.
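The paper gives no source code; the following NumPy sketch condenses the two phases, assuming a polynomial link for both the regression f and the variance model g (the paper also allows exponential or logarithmic links).

```python
import numpy as np

def learn(x, y, deg=2, window=20):
    """Learning phase: regression model f (eq. 3), residuals and their mean
    (eqs. 4-5), sliding-window variance (eq. 6), variance model g (eq. 7)."""
    coef = np.polyfit(x, y, deg)
    r = y - np.polyval(coef, x)
    r_mean = r.mean()
    order = np.argsort(x)
    xs, rs = x[order], r[order]
    var = np.array([rs[max(0, k - window + 1):k + 1].var()
                    for k in range(len(rs))])
    vcoef = np.polyfit(xs, var, deg)
    return coef, vcoef, r_mean

def z_score(x_new, y_new, coef, vcoef, r_mean):
    """Running phase: estimate, residual, variance and Z-score (eqs. 10-13)."""
    r = y_new - np.polyval(coef, x_new)
    sigma = np.sqrt(max(np.polyval(vcoef, x_new), 1e-12))  # guard g(x) > 0
    return (r - r_mean) / sigma

# A new steady state is flagged when |z_score(...)| exceeds the chosen
# threshold (3 by default, 6 for the second alarm level).
```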
Usage counters

At the end of the development tests, the engine is carefully removed and disassembled, and the major parts are examined. Engineers then check whether the wear of the parts is consistent with the use that was made of the engine during the test. Throughout the test campaign, the run time, the number of starts, the number of cycles and the timebands of different parameters are counted "by hand" by the bench engineers, based on the logbook. In order to automate these tasks and reduce potential calculation errors, it was decided to compute these usage counters from the test database. Unlike the measurement monitoring, the transient data (1 Hz frequency) are used here, because the temporal aspect is needed.

On a test day, it is easy to calculate the engine run time or the number of starts by setting conditions on the three following parameters:
- core speed
- exhaust gas temperature
- fuel master lever

In order to track engine cycles during development engine testing, the following cycle definition is implemented. The lower boundary is X% of fan speed and the upper boundary is Y% of fan speed (both depend on the engine family). When the engine exceeds the upper band of fan speed, a cycle is counted; when the engine is decelerated below the lower band, the counter is re-armed, and the next excursion above the upper band counts another cycle (a minimal sketch of this counting logic is given below).

Fig. 8: Representation of cycles (fan speed vs time, with the upper and lower bands and the counted cycles).

Finally, for a few engine parameters, the running time spent in different ranges of values is computed: these are the timebands. The engine operating range is split into about forty intervals for the following measurements:
- physical fan speed / corrected physical fan speed
- physical core speed / corrected physical core speed
- exhaust gas temperature
- average thrust
- fuel flow
- oil temperature
- etc.
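As an illustration of the cycle definition above, a minimal sketch follows; the X%/Y% band values depend on the engine family, so 25 and 80 below are placeholders.

```python
def count_cycles(fan_speed, lower=25.0, upper=80.0):
    """Count cycles over a 1 Hz fan speed record: a cycle is counted when
    the upper band is crossed; the counter is re-armed only once the speed
    has fallen below the lower band."""
    cycles, armed = 0, True
    for n1 in fan_speed:
        if armed and n1 > upper:
            cycles, armed = cycles + 1, False
        elif not armed and n1 < lower:
            armed = True
    return cycles

# Example: two excursions above the upper band separated by a re-arm.
print(count_cycles([10, 50, 85, 60, 20, 90, 30]))  # -> 2
```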
The usage counters are automatically calculated for each day of testing and cumulated throughout the campaign.

Implementation and Results

This tool is installed and tested in several test cells for CFM56 and LEAP engines. To facilitate the reading of the results in real time during the test, a graphical interface was developed with Matlab. Each measurement is represented by a coloured square: at a glance, users are able to see which measures are healthy or abnormal for the current steady state.

Fig. 9: User interface with coloured alarms.

If the user wants to understand why the tool raised an alert for a particular measurement, a click on the corresponding coloured square plots the current point against the theoretical model and the confidence tube (Fig. 10). After several trials of the tool during tests, the following results were obtained:
- PFA (Probability of False Alarm) ~ 3%
- POD (Probability of Detection) ~ 90%

Conclusions

The monitoring algorithm presented in this paper is a simple method to validate a measurement (as a point). The analysis is done in almost real time and quickly helps the test engineers react when an abnormal measurement appears. Coupling the measurement monitoring and the usage counters helps the test team save time and gain efficiency and quality. This tool is intended to be extended to the systematic monitoring of all measurements (test cell and engine) and to be deployed in all Snecma test cells.

References

[1] J. Lacaille, A Maturation Environment to Develop and Manage Health Monitoring Algorithms, PHM Society Conference, San Diego, CA, 2009.
[2] J. Lacaille and E. Côme, Sudden Change Detection in Turbofan Engine Behavior, 8th International Conference on CM/MFPT, Cardiff, UK, 2011.
[3] J. Lacaille, Standardized Failure Signature for a Turbofan Engine, IEEE Aerospace Conference, Big Sky, MT, 2009.
[4] J. Lacaille, V. Gerez and R. Zouari, An Adaptive Anomaly Detector used in Turbofan Test Cells, PHM Society Conference, Portland, OR, 2010.
[5] J. Lacaille and V. Gerez, Online Abnormality Diagnosis for real-time Implementation on Turbofan Engines and Test Cells, PHM Society Conference, Montreal, Canada, 2011.
[6] J. Lacaille and V. Gerez, A Batch Detection Algorithm Installed on a Test Bench, PHM Society Conference, 2012.

Fig. 10: User interface showing the current steady state, the theoretical model and the confidence tube.


Processing Ethernet Flight Test Data with Open Source Tools - Paul Ferrill - Avionics Test and Analysis Corporation - United States. This presentation will discuss the techniques used to analyze Ethernet data captured with an IRIG 106 Chapter 10 standard recorder during a recent flight test program conducted at Edwards Air Force Base. A combination of open-source tools, such as Wireshark and the Python programming language, was used to monitor and extract specific data values. Examples of coding techniques will be presented, along with a discussion of using Wireshark for troubleshooting and data discovery.
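The presentation's own code is not reproduced in this abstract; purely as an illustration of the workflow it describes, a sketch using the open-source scapy package might look as follows, assuming the Chapter 10 Ethernet channel has already been exported to an ordinary pcap file. The UDP port and the payload offset of the value of interest are invented for the example.

```python
import struct
from scapy.all import rdpcap, UDP

# Walk a pcap and pull one parameter out of a known message type:
# here, a big-endian float at byte offset 12 of UDP messages on port 5005
# (both values are placeholders, not from the presentation).
for pkt in rdpcap("flight_capture.pcap"):
    if UDP in pkt and pkt[UDP].dport == 5005:
        payload = bytes(pkt[UDP].payload)
        (value,) = struct.unpack_from(">f", payload, 12)
        print(float(pkt.time), value)
```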


Session 4 ICTS presentations in a zip file 
 

Archive contents (Session 4 ICTS presentations):
- 4-1-2 GHNASSIA ICTS Region II Report at ETTC 2015A.pptx
- 4-2-1 MAYER ITC-15_ICTS Session_Region1_Report.ppt
- 4-2-2 FALGA ETTC2015ICTS.pptx


A new design for ground TM/TC communications during spacecraft launch campaigns at the Guiana Space Centre

Nicolas HUGUES, CNES, STFO project manager; Michel THOMAS, Zodiac Data Systems, STFO product manager

I) STFO objectives

During a campaign at the Guiana Space Centre, satellite preparation follows these main steps:
- first checks
- fuelling operations
- encapsulation operations
- transfer to the launch pad
- countdown

The satellite needs to be controlled from its EGSE (Electrical Ground Support Equipment), located in an EPCU (payload preparation facility: S1A, S1B or S5C), during all these major operations, which take place in different areas (first checks in EPCU S1 or S5, fuelling in EPCU S3 or S5, encapsulation in the BAF, transfer and countdown on the launch table...). These buildings cover the whole launch base, including the SOYUZ area in the north, which means that the TM/TC links have to be established over a large network. Two types of interface are offered to a satellite project for TM/TC communications:
- a true radio-frequency interface to the satellite antenna
- a baseband interface using umbilical connections through the launcher

Both are used by most satellite projects. Originally, these links were established through real radio-frequency links, but at the end of the 1990s the system had to be improved for two reasons:
- growing difficulties due to the Guianese climate (RF attenuation due to rain)
- many configurations to manage, due to the increasing number of satellite campaigns

It was therefore decided to replace the radio-frequency links by optical links, to avoid the climate impact, and to develop wideband optoelectronic equipment to reduce the number of hardware configurations: the STFO system was born. The system entered service with the Ariane 4 campaigns.

II) STFO presentation

a. STFO general architecture

During a campaign, the TM/TC links established by the launch range for a satellite project have to be switched to different areas (fuelling building, encapsulation building, launch table...), following the satellite. To ease configuration and to reduce link cuts during switching, all the buildings are connected through optical fibres to a single nodal optical point located in the CDL3 building.

Figure 1: General optical architecture of the STFO

During the launcher transfer from the BAF to the launch pad, the satellite remains connected through a rolling-out optical cable.

Figure 2: DCO wagon

b. TM/TC RF links

The RF STFO system is interfaced to the satellite antenna with either:
- an RF horn installed on the launch table mast, facing the satellite antenna through an RF window in the launcher fairing, or
- an RF patch antenna inside the fairing, connected to the ground by umbilical RF cables.

Due to the long RF cable runs in the mast, it is generally necessary to add amplifiers, especially in Ku and Ka band.

Figure 3: STFO RF link architecture

The TM and TC link budgets specified at the satellite antenna interface are the following:

Figure 4: STFO RF link budget

The frequency bands available on the system are the following:

Figure 5: STFO RF frequencies

For a satellite project, two optical fibres are used: one for TM transmission and one for TC transmission.

Figure 6: STFO RF rack
c. TM/TC baseband link

The baseband STFO system is interfaced to the satellite through a Check-Out Terminal (part of the satellite EGSE):

Figure 7: STFO baseband link architecture

The baseband STFO system handles analog signals:

Figure 8: STFO baseband analog specifications

and digital signals:

Figure 9: STFO baseband digital specifications

The 4 TM and 4 TC channels are multiplexed on a single optical fibre.

Figure 10: STFO baseband rack

d. STFO limitations and evolutions

Since its installation at the end of the 1990s for Ariane 4, the system has been adapted to new needs:
- extension to the Ariane 5 launch area (BAF, transfer process, launch pad)
- extension to the EPCU S5 facilities
- modification of the optoelectronic equipment to cover the Ku band directly
- adaptation to the Ka band using extra equipment (RF converters, amplifiers...)
- extension to the new SOYUZ and VEGA launch pads

Today, the system presents some limitations and constraints:
- The original RF architecture, designed for the Ariane 4 launch pad, was very simple (one optoelectronic unit on each side of the link), but it had to be made more complex to cope with the higher frequency bands (Ku, Ka) and the tighter link budgets, by adding extra equipment to frequency-convert and amplify the RF signals. This extra equipment is no longer wideband (it is specific to each frequency band) and needs hardware configuration and tuning for each satellite project.
- Despite this extra equipment, the RF link budgets are still tight, in particular in Ku and Ka band, because of the long RF cables in the masts.
- Tuning the system to the out-of-band frequencies of a specific satellite project is difficult and requires hardware procurement with long delivery times.
- Some components of the equipment, in particular the optical ones, are very specific and face obsolescence problems; for example, the analog 18 GHz lasers are very expensive and have long delivery times.
- Moreover, these lasers, being directly modulated by large-dynamic-range RF signals, require accurate RF and optical power control.
- The monitoring and control of the STFO system has to be adapted to current technical and security rules.

III) STFO new concept

a. General new architecture

In the frame of the studies on the evolution of the launch base for future launchers, a research & technology study has been performed with ZDS (cf. [1]) to analyse a new generation of the STFO system, improving its performance and overcoming the constraints of the current system:
- improve the RF link budget (integration, filtering...)
- face obsolescence (use of standard Ethernet optical transmission)
- make configuration easier (new M&C)

Figure 11: New STFO synoptic diagram

The main limitation of the link budget is due to the long RF cables inside the mast. In the first design of the Ariane 5 launch pad, it was decided to install the optoelectronic equipment in the lower part of the mast, to avoid temperature and vibration problems, in particular for the laser components. In the new concept, it is proposed to install the optoelectronic equipment at the top of the mast, as close as possible to the satellite antenna interfaces. This architecture reduces the RF cable runs, and no extra equipment (amplifiers...) is needed away from the optoelectronic unit, drastically improving the link budget.

Figure 12: STFO rack in the mast

A specific installation will be studied to cope with the temperature and vibration environment.
Figure 13: Typical installation

The new rack will include an integrated RF part adapted to the satellite frequency band:
- an RF switch matrix to interface the horns and fairing antennas
- a diplexer module to separate the TM and TC accesses
- a wideband RF to L-band converter, including amplification and filtering, based on wideband YIG-technology local oscillators and filters

Figure 14: RF part

Figure 15: Examples of YIG components

To face the obsolescence of some optical components of the STFO equipment, it is proposed to study a major evolution of the system: replacing the analog optical transport by RF digitization and transmission over 10 Gbps Ethernet, which is by nature an optical high-data-rate transport system, fully compatible with the current optical network of the Kourou Space Centre. This will be done using ZDS's IFoIP™ technology. IFoIP™ is used for RF signal acquisition, channel selection and distribution in many satellite applications, such as carrier and signal analysis and monitoring, geolocation and others. The hardware part is based on a powerful broadband acquisition board, available with 70 MHz, 140 MHz and L-band signal acquisition plugs.

Figure 16: IFoIP™ board

This IFoIP™ board acts as a true "signal server", which delivers portions of spectrum on subscribers' demand, as presented in Figure 17; its synoptic diagram is presented in Figure 18.

Figure 17: Typical IFoIP™ use

The baseband rack will also use this Ethernet transmission. This digitization has many advantages:
- digital IF filtering directly adapted to the TM/TC signal bandwidth, optimizing the C/N ratio
- automatic gain control no longer needed on the optical link, only on the acquired TM/TC signals
- off-the-shelf optical equipment (Ethernet switches), reducing the specificity of the STFO equipment
- possibility to optimize the use of the fibres by multiplexing channels (for example, a group of satellites from the same project)
- in-band M&C using the same network
- configuration of the equipment through the network, with no hardware configuration
- gradual installation of the system in parallel with the current one
- potential evolution from a point-to-point architecture to an open network architecture, without manual optical switching

Finally, a new M&C system will also be proposed, using standards such as SNMP or others. Configuration will be much easier for the launch range operator, and security will be improved by well-known market solutions.

b. RF digitization aspects

TM/TC baseband signals can be used directly. For TM/TC RF signals, a single intermediate frequency (L-band) conversion is used for all frequency bands before interfacing to an RF-over-IP card.

Figure 18: Synoptic diagram of an IF-over-IP board equipped with a dual-channel L-band plug

The RF TM/TC channel(s) present in the 80 MHz bands is (are) digitally filtered, down-converted and buffered, then finally sent over one or several (up to four) 1 Gb Ethernet links.

Figure 19: 1 Gb Ethernet frame layout. The 1518-byte Ethernet frame is composed of a 14-byte MAC overhead, a 20-byte IP overhead, an 8-byte UDP overhead, the 1472-byte UDP data field and a 4-byte CRC; the UDP data field itself carries a 4-byte frame number, an 8-byte date and 1460 bytes of I/Q data.

The various RJ45 1 Gb links are concentrated by a 10 Gb Ethernet switch onto a 10 Gb Ethernet optical interface (10GBase-ER standard) for transport over the Space Centre's optical network.
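The Fig. 19 layout can be made concrete in a few lines of Python; the byte order and the encoding of the date field are assumptions, since the paper does not specify them.

```python
import struct

IQ_BYTES = 1460   # I/Q payload per frame (Fig. 19)

def udp_data_field(frame_number: int, date: int, iq: bytes) -> bytes:
    """Assemble the 1472-byte UDP data field: a 4-byte frame number,
    an 8-byte date, then 1460 bytes of I/Q samples (big-endian assumed)."""
    assert len(iq) == IQ_BYTES
    return struct.pack(">I", frame_number) + struct.pack(">Q", date) + iq

# Around this data field: 14 (MAC) + 20 (IP) + 8 (UDP) + 1472 + 4 (CRC)
# = 1518 bytes on the wire, as in Fig. 19.
print(len(udp_data_field(0, 0, bytes(IQ_BYTES))) + 14 + 20 + 8 + 4)  # 1518
```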
According to this standard, the maximum distance for the optical link between the two switches is 30 km, and it can reach 40 km for specifically engineered links; the maximum distance between two sites of the Kourou Space Centre is 29 km. Two 80 MHz bands can be processed simultaneously by one board in a narrow-band mode: inside each 80 MHz band, separate channels can be extracted through a filtering / decimation / DDC (Digital Down Converter) process.

Figure 20: Narrow-band mode

Figure 21: Digital STFO synoptic diagram. On the acquisition side, an RF to L-band converter feeds an IFoIP™ L-band plug, which performs channel selection (2 x 80 MHz), SAW filtering (80 MHz), gain control (RF and baseband) and digitization, then narrow-band filtering, decimation, digital down-conversion, framing and dating; up to 2 TM and TC channels are carried over 4 x 1 Gb Ethernet links, concentrated by 10 Gb Ethernet switches on each side of the Space Centre optical network, with L-band to RF converters towards the EGSE on the receive side.

The existing optical network is used without any modification, as an Ethernet transport network fully independent of the RF or baseband aspects of the STFO, the frames being managed by COTS Ethernet switches. At each end, the frames received by the XFP modules (the optical-electrical converters of the 10 Gb Ethernet switches) are routed to their destination port according to their MAC addresses. On the receive side, these frames are directed to digital-to-analog conversion cards in order to reconstruct the baseband or IF signals. These signals are then available on the client interface, to be directed to the EGSE. Baseband signals are directly usable, and IF signals can be used either directly, if the client equipment has an IF interface, or transposed back to the original RF band by frequency converters. The IF-over-IP board can work in both acquisition and restitution modes, only the firmware and the IF plugs being different.

c. System capacity

The L-band acquisition plug converts the real L-band signal into a complex baseband signal and digitizes each of the two I and Q components on 14-bit words at a rate of 120 MHz. A digital filter then corrects the flatness and the I/Q imbalance, and lowers the sample rate by a ratio of 3/4, to 90 Msamples/s. The resulting gross rate is therefore 90 x 10^6 x 14 x 2 = 2.52 Gbps. The RF specifications of the STFO are given for a channel bandwidth of 10 MHz. For this 10 MHz reference channel, we can use a DDC (Digital Down Converter) with a decimation ratio of 8, and the resulting net data rate is then 2.52/8 Gbps, i.e. 315 Mbps. According to Figure 19, this net bit rate of 315 Mbps corresponds to a gross bit rate of 315 x (1518/1460), i.e. 328 Mbps. A single 10 Gb Ethernet link, which has a bit rate of 10.3125 Gbps, can thus carry up to 15 such 10 MHz RF channels while being loaded to only half its capacity. This allows groups of satellites to be monitored through a single Ethernet link.
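The arithmetic of this section can be replayed in a few lines; the figures below simply restate the ones above.

```python
sample_rate = 90e6            # complex samples/s after the 3/4 rate change
gross = sample_rate * 14 * 2  # 14-bit I and Q -> 2.52 Gbps
net_10mhz = gross / 8         # DDC decimation by 8 -> 315 Mbps
on_wire = net_10mhz * 1518 / 1460   # Ethernet framing (Fig. 19) -> ~328 Mbps
half_load_channels = 0.5 * 10.3125e9 / on_wire
print(gross / 1e9, net_10mhz / 1e6, on_wire / 1e6, int(half_load_channels))
# -> 2.52 Gbps, 315 Mbps, ~327.5 Mbps, 15 channels at half load
```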
d. Received frames processing

In narrow-band mode, packets are received and sorted, and the samples of the different channels (with different bit rates) are stored in FIFOs, then processed by DUCs (Digital Up Converters) after interpolation, in order to restore them at their original clocks and frequencies and so reconstruct the original signal's spectrum.

Figure 22: Received frames processing principle

The latency variation of the packet arrivals has to be absorbed by the FIFO:

Figure 23: Jitter compensation using a FIFO buffer

As mentioned in the previous figure, it is also necessary to synchronize the DAC frequency (of the receiver) to the ADC frequency (of the transmitter). This synchronization can be performed in several ways:
- Distribution of a single clock through the network itself thanks to SyncE (Synchronous Ethernet). But this would imply that all the network components accept and process SyncE, which would be a limitation of the system.
- Synchronization protocols (IEEE 1588, EtherCAT, Powerlink...). These protocols are used to synchronize various network components in the time domain, with a resulting accuracy better than 100 ns; nevertheless, translated into jitter, such values would induce a drastic degradation of the signal's phase noise.
- A GPS receiver on each site. This could be a good solution, but it requires the installation of a GPS receiver on each STFO site.
- Locking the receiver clock onto the transmitter clock. This solution requires more processing power on the Rx board, but it has the advantage of providing an autonomous synchronization solution, independent of the network.

This last solution (Rx frequency locking onto the Tx clock) can be implemented with two algorithm families, "two time scale" and "constant clock":

- Two time scale algorithm (cf. [2]). This algorithm was introduced to synchronize packet reception within ATM networks. It works with two independent modes, separating the compensation of the jitter from the synchronization of the clocks. Its contribution to the phase noise of the reconstructed RF carriers should be low, according to the simulations that have been performed (see Fig. 28).

Figure 24: Two time scale algorithm

- Constant clock algorithm (cf. [3]). This algorithm has the lowest impact on the phase noise, provided that the free-run stability of the Rx clock is good. In this type of algorithm, during an acquisition phase, the rhythm of the packet arrivals in the FIFO is observed and quantified in order to separate the Tx clock component from the jitter component.

Figure 25: Constant clock algorithm

Once the frequency has been computed, the Rx clock runs in flywheel mode at this frequency, but the variations of the FIFO addresses continue to be evaluated within a specific window:
- If the current address remains inside this window, the variations represent only the jitter component and the Rx clock stays in flywheel mode at the current frequency.
- If the current address goes outside the window, this is interpreted as a frequency offset between the two clocks and a slow correction is applied to the Rx clock.

Note that the FIFO length is greater than the window, so even during this frequency change process there is no buffer overflow and no packet is lost. A minimal sketch of this decision logic is given below.

Figures 26 & 27: Long-term clock resynchronization

The objective could be to reach the following phase noise for the overall RF link:

Figure 28: Phase noise objective
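The decision logic of the constant clock algorithm can be summarized by a short model; all constants (FIFO centre, window size, correction gain) are placeholders, the real implementation being in hardware.

```python
def constant_clock_step(fifo_addr, rx_freq, centre=512, window=128, gain=1e-9):
    """One observation step after acquisition: stay on the flywheel frequency
    while the FIFO address remains inside the window (variations are then
    jitter only); apply a slow frequency correction when it drifts outside
    (interpreted as a Tx/Rx frequency offset rather than jitter)."""
    if abs(fifo_addr - centre) <= window:
        return rx_freq                                   # flywheel mode
    return rx_freq * (1.0 + gain * (fifo_addr - centre))  # slow correction
```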
IV) Research & Technology phase

A Research & Technology phase, necessary to confirm the feasibility of this evolution (digitization and transport parts), was performed in 2014 with Zodiac Data Systems (cf. [1]) to study this RF digitization of the STFO, based on a dedicated point-to-point Ethernet link. Taking into account the very simple architecture of the network and its deterministic character, due to its closed structure and the absence of any router, the risk of packet loss is very low; this allows UDP to be used instead of TCP as the transport protocol, maximizing the transmission rate by removing the control, acknowledgement and retransmission procedures of TCP. The R&T has focused on the following technical points:
- general design of the system
- network and signal clock synchronization
- identification and time-tagging of the packets
- required Ethernet network performance (packet loss, jitter...)

In 2015, a demonstration should be organized to characterize the clock recovery process and the quality of the reconstructed RF signal:
- phase noise
- signal-to-noise ratio
- spurious signals
- linearity (intermodulation)
- bit error rate of a transported modulated RF carrier

V) Open network evolutivity

The new STFO described here remains a "multi-point-to-point" star network. New opportunities could be offered in the future by connecting it to other LANs or by integrating it into a larger network. The UDP protocol, well adapted to high-bit-rate transmission in point-to-point or deterministic networks, is not suited to open network architectures; on the other hand TCP, which is perfectly suited to these open networks, drastically reduces the data rate compared to UDP. This would imply using specific protocols such as MPLS and/or IntServ/DiffServ, and very high data rate Ethernet technologies such as 40 Gb and 100 Gb Ethernet, while also taking into account the confidentiality aspects of the transmission.

Figure 29: Example of LAN extension

This will be done taking into account the new developments of Ethernet technology, such as 100 Gb Ethernet (100GBASE-ER4...), ciphering and protocol evolutions, as well as the security and confidentiality aspects of the transmission.

References

[1] Michel Thomas (2014). Etude R&T STFO de nouvelle génération.
[2] Muhiyaddin (1995). Adaptive Clock Recovery and Jitter Control in ATM Networks.
[3] Ayissi-Manga (2014). Rapport de stage. Bretteville l'Orgueilleuse: Zodiac Data Systems.

Glossary

ADC: Analog to Digital Converter
BAF: Bâtiment d'Assemblage Final
BB: Base Band
CAN: Convertisseur Analogique Numérique
CDL: Centre De Lancement
CNA: Convertisseur Numérique Analogique
COTS: Commercial Off The Shelf
DAC: Digital to Analog Converter
DCO: Dérouleur de Câble Optique
DDC: Digital Down Converter
DUC: Digital Up Converter
EGSE: Electrical Ground Support Equipment
EPCU: Ensemble de Préparation des Charges Utiles
FIFO: First In First Out
IF: Intermediate Frequency
IP: Internet Protocol
M&C: Monitoring & Control
MPLS: MultiProtocol Label Switching
RF: Radio Frequency
SNMP: Simple Network Management Protocol
STFO: Système de Transmission par Fibres Optiques
TC: Telecommand
TCP: Transmission Control Protocol
TM: Telemetry
UDP: User Datagram Protocol
YIG: Yttrium Iron Garnet


The entry into service of C-band telemetry at the Airbus Test Centre: first results and ways of improvement - Luc Falga - AIRBUS Operations - France. Airbus is authorized to use S-band for telemetry until the end of 2015. In October 2011, the decision was taken to move to C-band in 2013, to cope with the Airbus development aircraft planning. The objective was a real challenge for two main reasons: the C-band channel had not been characterized in the Airbus transmission environment, and it was necessary to validate the propagation performance for flight test use. The selected solution is based on Coded Orthogonal Frequency Division Multiplexing (COFDM) modulation. No existing solution was available, which led the Airbus Test Centre to develop its own equipment. The test results consolidated the choice of this modulation, given the high sensitivity to multipath of the usual frequency modulation in an airport environment full of buildings and aircraft. A go-ahead for entry into service was given at the end of December 2013. In four weeks during January 2014, 8 reception antennas and 12 development aircraft were modified and upgraded with the new C-band systems, ensuring operational telemetry for the on-going campaigns. After more than a year, we can present the results of operations and potential axes of improvement. The new capabilities, particularly in terms of data rate, will enable us to move towards a change of the telemetry message format from PCM to IP. We could take advantage of tools or applications already available, such as data-rate compression and video compression, to increase the telemetry capabilities for the future.


Combining a Reed-Solomon Block Code with a Blind Equalizer: Synchronization and Bit Error Rate Performance

Alexandre Skrzypczak, Grégory Blanc, and Tangi Le Bournault
Zodiac Data Systems - 2 rue de Caen - 14740 Bretteville l'Orgueilleuse - France
{alexander.skrzypczak, gregory.blanc, tangi.lebournault}@zodiacaerospace.com

Abstract

The performance of telemetry systems may be strongly affected by diverse sources of perturbation, the most critical being multipath channels and transmission noise. While the effects of multipath channels can be attenuated by equalization, the effects of noise are limited if forward error correction is used. This paper first shows that the combination of blind equalization and forward error correction can strongly improve bit error rates. The other objective of the paper is to show that reasonably powerful codes, such as Reed-Solomon codes, are sufficient to enable quasi-error-free transmissions in a large majority of propagation channel scenarios.

Introduction

In flight testing, and more generally in telemetry, the data link needs to be kept available during an entire mission. A mission, however, is a succession of different aircraft manoeuvres. Consider an airplane: it is first motionless in its parking position, then slowly moves to its take-off position (taxiing), then takes off and finally flies. An efficient telemetry system is able to guarantee an available data link for each manoeuvre; yet, in terms of physical analysis, each of these four phases corresponds to a very different signal transmission scenario (in terms of transmission channel).

In a parking context, the emitters and receivers are close to each other, leading to a high signal-to-noise ratio (the noise level is very low), but the transmitted signal may be subject to several permanent, high-level reflections on buildings or on the ground: the channel is seen as a frequency-selective multipath channel. For taxiing, the signal-to-noise ratio becomes lower and the reflections are more attenuated, but as the aircraft has a given speed, a Doppler shift and a Doppler spread may affect the transmission quality: we get a slowly time-varying channel. For take-off, the signal-to-noise ratio decreases further and the channel remains frequency-selective, but as the aircraft speeds up, the channel becomes rapidly time-varying. In a far-flight context, the signal-to-noise ratio is very low, but the power of the reflected paths can be neglected and the aircraft appears motionless from the receiver's point of view: the channel can be seen as a pure Gaussian channel.

Channel coding is generally used to limit the number of transmission errors due to additive white Gaussian noise (AWGN). Different coding strategies can be considered; they are described below. This solution is very efficient for the far-flight channel scenario. However, if the signal is also altered by a multipath channel, the coding performance may be significantly degraded: a large majority of coding processes are designed for a Gaussian perturbation, while the inter-symbol interference generated by a multipath channel may follow a different statistical distribution. In parallel, in the presence of noiseless multipath channels, an equalization process is very efficient at limiting the number of transmission errors.
However, as equalization never perfectly inverts the channel in practical cases, a residual distortion may still affect the data symbols, leading to residual transmission errors. Equalization is very efficient for parking scenarios, where the noise level is negligible.

The target of this study is to experimentally find conditions that guarantee a good transmission quality, in terms of synchronization and bit error rate, for all the transmission scenarios described above. We expect that, by coupling a channel code of reasonable performance (here a Reed-Solomon code) with the equalizer developed by Zodiac Data Systems, the residual errors induced by the equalization process can be corrected by the channel coding, even at reasonably low signal-to-noise ratios, i.e. we can expect quasi-error-free transmissions for all the channel scenarios.

The paper is organized as follows: after a brief presentation of the Reed-Solomon code considered for the experiments and of the ZDS equalizer, we describe the experimental setup and then the experimental results. We finally propose some requirements and draw our conclusions.

Channel Coding and Equalization

In this section, we first recall some generalities about channel coding. We then describe the channel code considered for our experiments, and we briefly present the blind equalizer developed by Zodiac Data Systems.

1. Channel coding

In the literature, channel codes are classically divided into three main families [1]: block codes, convolutional codes and iterative codes.

1.1. Block codes

Block coding consists in transforming a block (of bits or bytes) of size k into a block (of bits or bytes) of size n > k; n - k redundancy symbols are thus appended to the data block. The ratio k/n is called the coding rate. Block codes are usually decoded by syndrome decoding or by the Chase algorithm. Within the block code family, particular attention is due to the cyclic codes with the longest minimal Hamming distance, also called Reed-Solomon codes. Their particularity lies in the fact that they were designed to ensure an optimal error correction capability for a block code; and as they are cyclic codes, the decoding algorithms are based on polynomial computations, which guarantees a reasonably low complexity.

1.2. Convolutional codes

Convolutional codes are another well-known family of codes. They are based on the following idea: the n - k redundancy bits are produced by a binary operation on a given set of bits, obtained from a sliding window of size k over the original data stream. Decoding is performed by the well-known Viterbi algorithm, whose principle is to find the most probable path in the code trellis by minimizing the Hamming distance between the received binary sequence and the candidate sequences.

1.3. Iterative codes

The last class of forward error correction is the iterative codes, such as Turbo codes and LDPC codes. Although the encoders for these codes are very easy to implement (two recursive systematic convolutional codes with interleaving for the Turbo codes, and a sparse parity-check matrix for the LDPC codes), the efficiency of this class of codes lies in the iterative decoding.
For Turbo codes, the BCJR algorithm computes extrinsic information that iteratively feeds another BCJR decoder, refining the probability of decoding a zero or a one. LDPC decoding relies on the sum-product algorithm, whose principle is to iteratively exchange messages along the Tanner graph of the code in order to obtain a precise evaluation of the probability of decoding a zero or a one. This class of codes is known to reach quasi-optimal performance with respect to the Shannon limit. They are now proposed in the latest release of the IRIG 106 standard [2] and, more generally, in a large number of telecommunication standards.

1.4. Code choice

In this experiment, we decided to use a Reed-Solomon (RS) code (the CCSDS one), as we want to prove that, in a large majority of channel scenarios, a channel code of reasonable performance is sufficient to obtain a quasi-error-free transmission when combined with an equalizer.

2. RS code from CCSDS

In the following, we use the RS code proposed in the CCSDS standard, fully described in [3]. Its basic features are the following:
- k = 223 bytes and n = 255 bytes; this is a systematic code.
- The raw code rate is r = k/n ≈ 0.87.
- The code is theoretically able to correct t = (n - k)/2 = 16 erroneous bytes in a given block.
- In CCSDS, it is possible to introduce an interleaving of depth I (with I in {1, 2, 3, 4, 5, 8}), consisting in grouping I data blocks of size k to compute the code redundancy.
- The beginning of the code block is detected thanks to a binary sequence called the ASM (Attached Sync Marker). Its correct detection and synchronization is a prerequisite for a good decoding process; synchronization on this ASM is also studied in this paper.
- In the following experiments, the 4-byte ASM is the CCSDS one, i.e. 0x1ACFFC1D.

The data frame after coding is represented in Fig. 1 (an illustrative sketch of this framing is given at the end of this section).

Fig. 1: Organization of the CCSDS frame

3. ZDS Equalizer

Zodiac Data Systems recently proposed an equalizer as an option of its RTR (Radio Telemetry Receiver). Its basic features are the following:
- It is a multi-criterion (including a basic CMA) and iterative algorithm.
- For the moment, it only works with the classical PCM/FM modulation used, for instance, in the IRIG 106 standard.
- It is an adaptive algorithm, i.e. it tracks the channel variations; in other words, it is able to correct the degradations brought by time-varying and frequency-selective channels.
- It is well adapted to low bit rates (around 1 Mbps), but its performance does not degrade for bit rates up to 4 Mbps.

Additional information, as well as its theoretical performance, is given in [4]. It is shown there that, in very different channel scenarios, this iterative equalizer offers considerably better data link availability than standard demodulation or a plain CMA algorithm.
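For illustration only, the frame structure can be reproduced with the open-source reedsolo package; note that reedsolo's default field generator differs from the CCSDS (255,223) code (which also uses a dual-basis representation), so the parity bytes below are not interoperable with a CCSDS decoder.

```python
from reedsolo import RSCodec   # pip install reedsolo

ASM = bytes([0x1A, 0xCF, 0xFC, 0x1D])   # CCSDS Attached Sync Marker
rs = RSCodec(32)                         # n - k = 32 parity bytes, t = 16

def frame(data_223: bytes) -> bytes:
    """ASM followed by a (255,223) codeword, as in Fig. 1 (case I = 1)."""
    assert len(data_223) == 223
    return ASM + bytes(rs.encode(data_223))

counter = bytes(i % 256 for i in range(223))   # the looping test counter
print(len(frame(counter)))                     # 4 + 255 = 259 bytes
```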
Experiment setup

In this section, we describe the experimental testbench we used, as well as the equipment and its configuration. We also present the channel models we considered.

1. Testbench

In order to fairly compare the gains brought by the RS code, by the equalizer and by the association of both, we chose to record PCM/FM signals distorted by different channels and noise powers. Each signal is then demodulated by 4 different methods: equalizer on/off and RS decoder on/off. In this way, the different demodulation processes run over the same noisy and distorted channel, so that any performance difference we observe is independent of the distortion statistics.

To simulate a signal transmission close to real on-field situations, we first used an SMBV 100A from Rohde & Schwarz to generate a data transmission with PCM/FM modulation. The useful data is a binary counter from 0 to 255 that loops back to 0 after reaching the value 255. The redundancy is then derived from the coding techniques described in [3]. In order to study the influence of the coding interleaving, we chose 2 different interleaving depths, I = 1 and I = 5. After ASM insertion, the 2 final bit streams (for I = 1 and I = 5) are loaded into the SMBV. Each data stream is transmitted as a PCM/FM modulation with a modulation index of 0.7, at 2 different bit rates: 1 Mbps and 4 Mbps.

We used an AMU 200 from Rohde & Schwarz as channel simulator and noise generator: the signal at the SMBV baseband output is distorted by static or dynamic multipath channels, and noise is added, by the AMU. The resulting signal is then re-injected into the SMBV for an RF transposition at 70 MHz, and recorded and stored by an RSR (Radio Signal Recorder), a signal recorder developed by ZDS. The testbench for signal generation is shown in Fig. 2(a).

Fig. 2: Used testbenches: (a) signal generation and recording; (b) estimation of BER and synchronization performance.

The RSR is also able to replay a recorded signal. The replayed signal then feeds an RTR, which demodulates the signal (with/without RS decoding, with/without equalization) and recovers the received binary frames. These frames are stored in files, which are finally post-processed with Matlab in order to estimate the ASM synchronization performance and the bit error rates. This testbench is displayed in Fig. 2(b).

2. Synchronization and BER

The synchronization of the bit stream is made by a frame synchronizer. Frames are identified thanks to a sync pattern, and the frame synchronizer parameters are the following:
- Bit slip (size variation of the search window where the sync word is expected): 0
- Sync threshold (number of errors allowed in the sync word): 3
- Check to lock (number of consecutive correctly synchronized frames before the synchronizer is declared locked): 0
- Lock to search (number of consecutive frames where the sync word is not found before restarting the search): 0

From the transmission parameters (data rate, number of bits per frame...), the number of transmitted frames can easily be estimated, and the RTR gives the number of binary frames actually processed; the percentage of lost frames is then easily derived. The BER is calculated only on the frames that were correctly synchronized (the lost frames are not considered at all). A sketch of this post-processing is given below. In the following, we consider that the transmission is quasi-error-free if BER < 10^-6.
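The post-processing was done in Matlab; an equivalent Python sketch of the BER count on correctly synchronized frames (the known 0-to-255 counter makes the expected bit stream deterministic) is:

```python
import numpy as np

def ber(received_frames, expected_frame):
    """BER over synchronized frames only; lost frames are simply absent
    from received_frames and are accounted for separately."""
    exp = np.unpackbits(np.frombuffer(expected_frame, dtype=np.uint8))
    errors = bits = 0
    for f in received_frames:
        got = np.unpackbits(np.frombuffer(f, dtype=np.uint8))
        errors += int(np.count_nonzero(got != exp))
        bits += got.size
    return errors / bits if bits else 0.0
```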
3. Channel models

3.1. Parking channel

As previously described in the introduction, transmitters and receivers are close to each other. Consequently, we can expect a high Eb/N0, but the reception can be degraded by several high-powered signal reflections. In addition, as the transmitter is supposed to be motionless, the channel can be considered as static. We then model this channel with the model proposed by M. Rice in [5], called the AFB2 channel. Even if AFB2 was not proposed to represent such a transmission scenario, it fits a lot of channel soundings we made in a parking context. All paths have a constant phase and no Doppler, as the transmitter is motionless. The characteristics of the paths are given in the following table.

Path number | Relative power (dB) | Delay (µs) | Doppler (Hz) | Path type
1 | -16 | 0 | 0 | Constant phase
2 | 0 | 0.05 | 0 | Constant phase
3 | -9 | 0.1 | 0 | Constant phase
4 | -9 | 0.49 | 0 | Constant phase
5 | -9 | 0.73 | 0 | Constant phase
6 | 0 | 0.78 | 0 | Constant phase
7 | -16 | 0.87 | 0 | Constant phase
8 | -15 | 0.92 | 0 | Constant phase

3.2. Taxiing model

In a taxiing channel, the contribution of the reflected paths is attenuated by the fact that the distance between transmitters and receivers increases. However, as the aircraft moves, the channel becomes time- and frequency-varying. We made some channel soundings at the Airbus airport at Toulouse-Blagnac [6] and, from these experiments, we have modelled a taxiing scenario that might be problematic for classical FM demodulation. The taxiing model is described in the following table. The main path follows a Rice distribution while the reflected ones follow a Rayleigh distribution. The channel is simulated for a transmission over the S-band, which means that the 100 Hz Doppler corresponds to a transmitter speed equal to 50 km/h.

Path number | Relative power (dB) | Delay (µs) | Doppler (Hz) | Path type
1 | 0 | 0 | 100 | Rice
2 | -8 | 2.5 | 100 | Rayleigh
3 | -27 | 8 | 100 | Rayleigh

3.3. Take-off channel

From the channel soundings at Airbus [6], we have derived some models for the take-off case. It can be found that the reflected paths are low-powered but have a wide Doppler spectrum. This scenario may be very problematic for multicarrier modulations like COFDM, as a wide Doppler spectrum may generate significant intercarrier interference. For monocarrier modulations like PCM/FM, the effect of Doppler spread is very limited. In addition, the power of the reflected paths is low enough so that the classical FM demodulation is not affected and may guarantee a quasi-error-free transmission if the noise level is sufficiently low, which has been confirmed by Matlab simulations, not displayed here. As a consequence, no experiment on this channel has been proposed.

3.4. Far Flight channel

In such a context, we suppose that the aircraft is far enough away so that the reflected paths are negligible. However, as the distance between transmitter and receiver is large, the value of Eb/N0 is very low. To simulate this channel, we use the AMU 200 as a noise generator by switching off the multipath fading.

3.5. Benchmark channel

In order to characterize the performance of our equalizer, we propose a synthesized channel for which the FM demodulation outputs a lot of errors and for which the equalizer offers a quasi-error-free transmission. From a frequency point of view, this channel is seen as a deep fading in the signal bandwidth, and this fading periodically crosses the signal bandwidth in a short period of time. As a result, this channel is highly frequency-selective (because of the deep fading) and highly time-selective (because the channel characteristics quickly vary). This channel is modelled as described in the following table.

Path number | Relative power (dB) | Delay (µs) | Doppler (Hz) | Path type
1 | 0 | 0 | 0 | Static
2 | -1.5 | 2 | 30 | Constant phase
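To illustrate how such path tables translate into a baseband simulation, the sketch below applies the benchmark channel of the last table to a complex baseband signal as a two-tap delay line. This is our own sample-spaced approximation (the 2 µs tap delay is rounded to an integer number of samples), not the internal model of the AMU 200.

    import numpy as np

    def apply_benchmark_channel(x: np.ndarray, fs: float) -> np.ndarray:
        """Static LOS path + echo at -1.5 dB, 2 us delay, 30 Hz Doppler shift."""
        t = np.arange(len(x)) / fs
        d = int(round(2e-6 * fs))                   # 2 us delay in samples
        echo = np.concatenate([np.zeros(d, dtype=complex), x[:len(x) - d]])
        a = 10 ** (-1.5 / 20)                       # -1.5 dB relative amplitude
        return x + a * np.exp(2j * np.pi * 30.0 * t) * echo

    # Example: a 1 Mbps signal oversampled at 8 samples per bit (fs = 8 MHz)
    fs = 8e6
    x = np.exp(1j * 0.3 * np.arange(80000))         # stand-in baseband waveform
    y = apply_benchmark_channel(x, fs)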
Experimental results

In this section, we show the results of our lab experiments using the testbenches displayed in Fig. 2. We chose to give the results only for a bit rate equal to 1 Mbps and for I = 5, as we did not notice any major difference (except where mentioned) in the performance with a data rate of 4 Mbps and with I = 1.

1. Far Flight scenario

The results of our experiments are given in Fig. 3. They show that equalization does not improve the performance in a Gaussian scenario. This can easily be explained, as the equalizer is designed to invert the convolutive effect of the channel and is not able to correct the additive noise, which is the only source of degradation in a Gaussian channel. However, RS decoding strongly improves the BER, as expected. For low Eb/N0 values, we observe a strong improvement of the synchronization performance (represented by the percentage of lost frames) by using equalization, i.e. we can retrieve more frames. For further investigations, it would be interesting to develop a way to detect the presence of multipath in order to activate the equalizer only when it is useful.

Fig. 3: experiment results for the Far Flight scenario: (a) percentage of lost frames; (b) BER

2. Parking scenario

The results of our experiments are given in Fig. 4.

Fig. 4: experiment results for the Parking scenario: (a) percentage of lost frames; (b) BER

In this scenario, we observe that, without equalization, it is very difficult to lock the synchronization process. This can be explained by the fact that if the sync word is altered for one frame, it remains altered for all the frames, as the channel is static. We also show that equalization drastically improves the synchronization and consequently the BER. We observe that, in terms of BER, blind equalization allows a quasi-error-free transmission for Eb/N0 > 27 dB. However, we would need to add more points to the graph to affirm that RS decoding allows the correction of the residual errors after equalization. It can also be observed that RS decoding without equalization does not improve the BER results at all.

3. Benchmark scenario

The results of our experiments are given in Fig. 5.

Fig. 5: experiment results for the Benchmark scenario: (a) percentage of lost frames; (b) BER

The association of blind equalization and RS decoding allows quasi-error-free transmissions for Eb/N0 > 20 dB. We here confirm that equalization lowers the error floor and that RS decoding is able to correct the residual errors after equalization. As the channel is dynamic in this case, this proves the ability of the algorithm to track the quick channel variations well, even for reasonably high values of noise power. Note again that RS decoding without equalization is totally inefficient. Equalization also brings a great reduction of lost frames.

4. Taxiing scenario

The results of our experiments are given in Fig. 6.
This experiment shows that the error floor for EQZ+RS lies between 10^-3 and 10^-4, with no real gain brought by the RS decoding. This shows that the channel is fundamentally hard to equalize perfectly: the residual equalization error is too powerful for the channel decoding to correct it. Using a higher data rate also seems to degrade the performance, as the equalization algorithm is limited by the high computational resources needed. A solution is to increase the number of iterations in the equalization algorithm in order to lower the power of the equalization error, or to use a more efficient channel code. We can expect that a concatenated code (RS + convolutional code with an interleaver between them) would be sufficient. If not, an LDPC code should be able to reach a quasi-error-free performance.

Fig. 6: experiment results for the Taxiing scenario: (a) percentage of lost frames; (b) BER

Conclusions

From this study, we can extract interesting information about equalization and channel decoding:
- Equalization always improves the synchronization process, i.e. we can retrieve more frames by using a blind equalizer,
- In the presence of a multipath channel, RS coding alone does not improve the BER performance,
- In the same context, blind equalization alone always improves the BER,
- For a large set of channel scenarios, the combination of blind equalization and RS decoding allows quasi-error-free transmissions for reasonable values of Eb/N0,
- For some complex channel models, the residual distortion after blind equalization is too strong for the RS algorithm to correctly decode the information. More powerful codes would improve the system performance if they suit the residual distortion.

References

[1] J. Proakis, Digital Communications, McGraw-Hill, 2001.
[2] IRIG 106, Transmitter and Receiver Systems, 2015.
[3] CCSDS 101.0-B-6, Telemetry Channel Coding, October 2002.
[4] A. Gueguen and D. Auvray, "Multipath Mitigation on an Operational Telemetry Link", in ITC 2011, October 2011.
[5] M. Rice, "Multipath Modeling and Mitigation Using Multiple Antennas (M4A)", Technical report, 2014.
[6] A. Skrzypczak, A. Thomas, and G. Duponchel, "Paradigms Optimization for a C-Band COFDM Telemetry with High Bit Efficiency", in ITC 2013, October 2013.


Limitation of the 2-Antennas Problem for Aircraft Telemetry by Using a Blind Equalizer

Alexandre Skrzypczak, Grégory Blanc, Tangi Le Bournault, and Jean-Guy Pierozak
Zodiac Data Systems – 2 rue de Caen – 14740 Bretteville l'Orgueilleuse – France
{alexander.skrzypczak, gregory.blanc, tangi.lebournault, jeanguy.pierozak}@zodiacaerospace.com

Abstract

The emission of the telemetry signal over at least two different antennas is required to keep the telemetry link available during a maneuver of a flying object. If nothing is done at the transmitter side, the telemetry link can be fully lost, as both signals may arrive in opposite phase. We here propose a simple solution based on delay diversity to solve this problem. The basic idea is to introduce a delay between both emitted signals to guarantee a non-destructive signal recombination. We then exploit the ability of the blind equalizer developed by ZDS for the PCM/FM modulation to correctly equalize this signal and to recover the initial data. This solution does not require any modification of the on-board and ground set-ups except the introduction of a delay line between both transmitting antennas. It also does not need any pilot sequence and is natively robust to multipath perturbations.

Introduction

In order to keep a telemetry link available during a maneuver of a flying object (especially an airplane), the emission of the telemetry signal over two (or more) different antennas is required. This signal has to be sent over the same central frequency for an optimal spectral occupancy. However, if nothing is done at the transmitter side, the well-known "2-antennas problem" arises. Indeed, if the receiving antenna points at the aircraft with certain angles, the telemetry can be fully lost, as the propagation of both signals is such that they arrive in opposite phase. A first simple solution is then to significantly attenuate the power of one of the two signals in order to ensure a non-destructive signal recombination. However, this solution has a great cost on the overall link budget. Another solution is to send both signals through different antenna polarizations, which may guarantee the recovery of the signal of interest if the receiving antennas are correctly polarized. But the polarization of the receiving antennas has to be perfectly tuned in order to avoid cross-polarization interference that may significantly degrade the bit error rate.

Recently, a solution based on Space-Time Coding (STC) has been proposed for the SOQPSK modulation [1]. Its performance has also been tested in real environments [2-3], showing its ability to solve the 2-antennas problem while maintaining a good telemetry performance. However, this solution has some drawbacks, like the necessity to change both transmitters and receivers in order to manage this STC solution. The data rate is also reduced, as it is necessary to insert a pilot sequence in the data flow for signal synchronization. The robustness of this solution with respect to multipath environments is also not really proved.

We here propose a simple solution based on delay diversity to manage the 2-antennas problem. The basic idea is to introduce a small delay between both emitted signals in order to guarantee a non-destructive signal recombination. By doing this, the receiving antenna sees the transmitted signal as if it had passed through a transmission channel with 2 paths.
We then exploit the ability of the blind equalizer developed by Zodiac Data Systems [4] for the PCM/FM modulation to correctly equalize this signal and then to recover the initial data. This equalizer is sold as an option of the ZDS telemetry receiver, the RTR (Radio Telemetry Receiver). This solution for daisy-pattern mitigation requires only a limited modification of the on-board and ground set-ups: the introduction of a delay line between both transmitting antennas. It also does not need any pilot sequence in the data flow and is natively robust to multipath perturbations. This paper presents the lab experiments we made to establish the feasibility of this solution. It also gives some clues on how to tune the delay between both antennas and, if necessary, the attenuation of the second path. We finally measure the system performance in the presence of a multipath environment.

Two-antennas problem

1. Problem statement

As previously explained in the introduction, the two-antenna problem is a consequence of the fact that telemetry signals have to be sent over two different antennas in order to avoid telemetry loss. The transmitted signals thus arrive at the ground station with different phases due to the difference in propagation delays. This phase difference also depends on the value of the carrier frequency. In some configurations, this phase difference has no great impact on the received signal. But, for certain observation angles between the ground antenna and the aircraft, it may happen that the transmitted signals have an opposite phase: the received signal might then be cancelled or nearly cancelled. This phenomenon is illustrated in Fig. 1.

Fig. 1: illustration of the two-antenna problem. In some configurations, the overall received signal can be reinforced while, in other ones, it can be fully cancelled.

It is then possible to define the set of antenna pointing angles for which the telemetry might be lost. The shape of this diagram looks like an antenna daisy pattern. An example of this kind of diagram is given in Fig. 2; it is extracted from [2].

Fig. 2: radiation diagram for the emission of two telemetry signals.

2. Possible solutions

A basic solution is to significantly reduce the power of the signal on one of the antennas. Indeed, by doing so, even if the signals are in opposite phase when they recombine, the amplitude difference ensures that there is no signal cancellation. It is commonly admitted that the two-antenna problem is avoided if an amplitude difference between 6 and 10 dB is introduced.

Another solution has been proposed by M. Rice in [1-2], based on space-time coding. This solution is only proposed for the SOQPSK modulation. The basic principle is to avoid the destructive recombination of the transmitted signals by creating signal diversity, i.e. the first antenna sends the original binary data flow while the other antenna carries a different version of the original binary data flow. As, finally, two different data flows are transmitted (one per antenna), the probability of a destructive signal recombination at the receiver side is almost zero. As the sum of both signals is received, a dedicated demodulator has to be implemented in order to retrieve the original data stream. To do so, a pilot sequence of 128 bits is inserted after each block of 3200 bits. This sequence is used to detect the beginning of each data block.
After this detection, an estimation of the signal frequency offset is performed in order to synchronize the signal in the frequency domain. Then, an equalization of the signal, based on the minimum mean square error criterion, is performed. A space-time decoding, based on trellis decoding, is finally made in order to obtain the original data stream. Simulations show that the overall system performance is good, which has also been confirmed by on-field tests at the Air Force Flight Test Center at Edwards [2], even if more advanced tests made in [3] also show that this solution remains sensitive to the presence of multipath channels.

Finally, even if this STC-based solution seems to be attractive in terms of performance and limitation of the two-antenna problem, some major drawbacks arise:
- A specific modulator is needed to perform the space-time coding,
- A specific demodulator is needed to correctly demodulate the signal,
- The original data stream has to be modified in order to insert the pilot sequence,
- This pilot sequence reduces the bitrate by 4 %,
- There is no mention of the maximal data rate that can be used,
- The transmission remains sensitive to channel multipath effects.

Blind Equalizer-based solution

1. Description of the proposed solution

We here propose a solution for the limitation of the two-antenna problem that is based on delay diversity. The basic idea is to send the same signal over both antennas but with a fixed delay between them. For instance, antenna 1 sends the useful signal s(t) while antenna 2 sends the delayed signal s(t − τ). The time difference between both antennas can easily be obtained thanks to a difference of cable lengths between the transmitter and the antennas, or thanks to a simple delay line. Consequently, if the signal is transmitted over a noiseless perfect channel, the signal received at the ground station can be written as follows:

r(t) = s(t) + α·s(t − τ)    (1)

where:
- α is a complex-valued coefficient,
- τ is the value of the delay between both antennas.

The coefficient α reflects the fact that a given and fixed power attenuation and phase difference can be set between both antennas. If |α| = 1, this means that both antennas transmit the signal with the same power. |α|² = 0.5 means that the signal on the second antenna is transmitted with half the power of the first antenna. From Eq. (1), we obtain in the frequency domain:

R(f) = S(f)·(1 + α·e^(-j2πfτ))    (2)

From this latter equation, we can conclude that transmitting a signal with delay diversity on two antennas is strictly equivalent to transmitting a signal over a multipath channel (here with a single additional path). This is illustrated in Fig. 3.

Fig. 3: effect of the delay diversity from the receiver point of view

As a consequence, since the receiver sees the delay diversity as a multipath channel, it is interesting to exploit the properties of the blind equalizer developed by Zodiac Data Systems, whose performance is widely described in [4]. The basic idea is to equalize the signal that arrives at the ground station in order to correctly demodulate it afterwards. Note finally that this solution is only valid for the PCM/FM modulation, as the blind equalizer is only developed for this modulation for the moment.

This solution is very similar to the one consisting in significantly attenuating the power of one of the antennas (in the following, this solution will be referred to as the classical solution). Instead of attenuating by up to 10 dB in order to guarantee a possible demodulation, we expect that the blind equalizer will allow an important reduction of the power attenuation of the second antenna.
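Equation (2) makes the position of the fading easy to check numerically. The short sketch below (an illustration, not ZDS code) evaluates the magnitude of the equivalent channel response for the 2 dB attenuation and a delay of a few microseconds used later; the notches repeat every 1/τ Hz, and sweeping the phase of α simply shifts their position in the band.

    import numpy as np

    def equivalent_channel(f: np.ndarray, alpha: complex, tau: float) -> np.ndarray:
        """Frequency response seen by the receiver for delay-diversity emission."""
        return 1.0 + alpha * np.exp(-2j * np.pi * f * tau)

    alpha = 10 ** (-2 / 20)             # 2 dB attenuation, phase taken equal to 0
    tau = 3e-6                          # 3 us delay (value retained at 1 Mbps)
    f = np.linspace(-2e6, 2e6, 4001)    # +/- 2 MHz around the carrier
    H = np.abs(equivalent_channel(f, alpha, tau))
    print("deepest fade: %.1f dB, notch spacing: %.0f kHz"
          % (20 * np.log10(H.min()), 1e-3 / tau))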
2. Laboratory testbench

For the laboratory testbench, we used:
- an SMBV 100A from Rohde & Schwarz as signal generator,
- an AMU 200 from Rohde & Schwarz as channel simulator,
- an RTR (Radio Telemetry Receiver) from Zodiac Data Systems, with a blind equalizer, for signal demodulation and BER evaluation.

The testbench synoptic is displayed in Fig. 4.

Fig. 4: laboratory testbench for validation of the proposed solution.

From this testbench, we try to estimate the best set of parameters so that the signal can be perfectly demodulated after equalization. In other words, we try to find the best power attenuation and the best delay for which the BER remains equal to zero after equalization.

3. Parameter settings

The AMU 200 channel simulator allows a lot of different channel configurations. We first suppose a very simple configuration where the transmission is made with both antennas and without multipath effects or noise. We also suppose that both transmitting antennas are motionless so that there is no Doppler effect to consider. In order to significantly improve the link budget compared to the classical solution, we chose to set the power attenuation to 2 dB.

The parameter τ has an impact on the frequency response of the equivalent channel. Indeed:

H(f) = 1 + α·e^(-j2πfτ)    (3)

As a consequence, τ has an influence on the position of the fading in the signal bandwidth. Then, when a value of τ that leads to a BER equal to zero is found, we must check that this is the case for all values of the phase of α. The path table in the AMU 200 is then configured as follows:

Table 1: configuration of the AMU 200

Path | Profile | Path loss (dB) | Delay (µs) | Const. Phase (deg) | Res. Doppler Shift (Hz)
1 | Static path | 0 | 0 | 0 | 0
2 | Pure Doppler | 2 | variable | variable | 0

We then derive the value of τ (delay of path 2) so that the BER is equal to zero for all phase differences (described by the parameter Const. Phase in the AMU 200). This value is estimated for different bit rates. The maximal tested bit rate is 4 Mbps, as it is, for the moment, the maximal bitrate accepted by the equalizer. The results of these experiments are summed up in Table 2.

Table 2: results of the experiments on a basic configuration for an attenuation of 2 dB.

Data rate | 500 kbps | 1 Mbps | 2 Mbps | 3 Mbps | 4 Mbps
τ (µs) | 5 | 3 | 2 | 1 | 1

We here show that it is possible to derive a set of parameters over a large range of useful data rates so that the equalized signal can be perfectly demodulated.

4. Influence of multipath channels

This section aims at showing that the RTR keeps its ability to correct the effects of multipath channels even in the presence of a second LOS due to the signal propagation from the second antenna. To do so, we suppose that one reflection affects the signal propagation from antenna 1 and two reflections affect the one from antenna 2. We then get the case described in Fig. 5, derived from Fig. 3. We also use the same testbench as in Fig. 4.

Fig. 5: channel modelization for the study of the influence of multipath channels

The path table in the AMU 200 is then configured as follows:

Table 3: configuration of the AMU 200

Path | Profile | Path loss (dB) | Delay (µs) | Const. Phase (deg) | Res. Doppler Shift (Hz)
1 | Static path | 0 | 0 | 0 | 0
2 | Pure Doppler | 2 | τ | 0 | 0
3 | Rayleigh | 10 | 0.7 | 0 | 0
4 | Rayleigh | 15 | τ + 1 | 0 | 0
5 | Rayleigh | 20 | τ + 1.5 | 0 | 0

The channel representation above is chosen to prove the ability of the equalizer to correct the multipath effects and is not derived from any channel soundings. In Table 3, the value of τ is the one found in Table 2 for a path loss of 2 dB, i.e.
we keep the error-free configurations after equalization to check whether these configurations are also sufficient in the presence of multipath channels. We then obtain the following results:

Table 4: experiment results in the case of multipath channels

Data rate | Experiment result
500 kbps | The BER remains equal to 0. The multipath effects are corrected.
1 Mbps | The BER remains equal to 0. The multipath effects are corrected.
2 Mbps | Residual errors are still present. We get an error-free transmission if we slightly change the configuration: to keep the same attenuation for the second antenna, the value of τ has to be around 1.5 µs; alternatively, to keep the same value of τ, the attenuation must be around 4.5 dB.
3 Mbps | The BER remains equal to 0. The multipath effects are corrected.
4 Mbps | The BER remains equal to 0. The multipath effects are corrected.

We here show that, for the large majority of the considered data rates, the previous error-free configurations also allow the correction of the multipath effects. When the data rate is 2 Mbps, the configuration has to be slightly modified. Another solution would be to oversize the value of τ obtained in Table 2 so that the equalizer is able to correct the second LOS and the multipath at the same time. Note also that the multipath environment is time-varying and it could happen that the equalizer is unable to correct more complex channel environments.

Conclusion

We here set up some laboratory experiments that proved the ability of the equalizer in the RTR to limit the daisy patterns due to the 2-antennas problem in the case of a PCM/FM transmission. This solution also presents very interesting aspects:
• There is no need to modify the modulator,
• At the receiver side, the RTR only needs to be equipped with an equalizer,
• There is no need to insert any additional pilot sequence in the data stream,
• The system also allows the correction of multipath channels,
• Only a slight modification of the on-board set-up is needed: an additional cable length or a programmable delay line.

After this lab study, the following step would be to evaluate this solution in a real context and to make on-site measurements.

References

[1] M. Rice, "Space-Time Coding for Aeronautical Telemetry: Part I – System Description", in ITC 2011.
[2] M. Rice and K. Temple, "Space-Time Coding for Aeronautical Telemetry: Part II – Experimental Results", in ITC 2011.
[3] K. Temple, "Performance Evaluation of Space-Time Coding on an Airborne Test Platform", in ITC 2014.
[4] A. Gueguen and D. Auvray, "Multipath Mitigation on an Operational Telemetry Link", in ITC 2011.


ETTC 2015 – European Test & Telemetry Conference

A Gaussianization-based performance enhancement approach for coded digital PCM/FM

Guojiang Xia, Xinglai Wang, Kun Lan
Beijing Institute of Astronautical Systems Engineering, Nandahongmen Road 1, Beijing, China

Abstract: The BER performance of the coded digital PCM/FM telemetry system depends on the accuracy of the input likelihood metrics, which are greatly influenced by the click noise. This letter presents a Gaussianization approach to weaken the influence of the click noise. The outputs of the limiter/discriminator are first modeled by a Gaussian mixture model, whose parameters are estimated by the expectation-maximization algorithm; the amplitudes are then adjusted by a proposed Gaussianization filter so that they become more accurate as likelihood metrics. When the (64, 57)² TPC is applied, simulation results show a coding gain of 0.8 dB at the 10^-4 BER level.

Keywords: PCM/FM, limiter/discriminator, Gaussianization, turbo product codes, LDPC

1. Introduction

Pulse code modulation/frequency modulation (PCM/FM), which is robust against flame attenuation, polarization mismatch, multipath fading and phase interference, is a commonly deployed technique in a variety of telemetry areas and other applications. Many forward error-correction (FEC) codes, such as convolutional codes, Reed-Solomon (RS) codes, turbo product codes (TPC) and low-density parity-check (LDPC) codes [1-4], are employed to enhance the bit error rate (BER) performance of the digital PCM/FM telemetry system.

Because of its simplicity, the limiter/discriminator (L/D) is often used for the demodulation of digital FM systems. However, it is well known that the noise in the demodulated signal at the output of the L/D becomes impulsive when the carrier-to-noise power ratio decreases below about 10 dB, even if the channel is an additive white Gaussian noise (AWGN) channel. The most famous description of this kind of noise was proposed in 1963 by S. O. Rice [5], who regarded the noise as the sum of two related components: approximately Gaussian noise and a kind of impulsive noise, namely the so-called click noise.

The performance of all soft-input soft-output (SISO) decoding algorithms, such as the Viterbi decoder, the Chase decoder and the belief propagation decoder, depends on the accuracy of the soft-input likelihood metrics. Accurate likelihood metrics are easy to obtain when the amplitude distribution of the noise in the soft-input signal is known, for example for Gaussian noise. However, if the amplitude distribution of the noise is unknown or hard to characterize, accurate likelihood metrics are hard to get. In practical coded PCM/FM systems, the decoders of the FEC codes adopt a Gaussian assumption for the sake of simplicity. This is not optimal for the click noise background and decreases the SISO decoding performance.

Many efforts, reported in [6], have been made to improve the SISO decoding performance of the coded digital FM telemetry system with L/D. Most of them are based on Rice's click model and intend to detect and eliminate the click noise. However, it is very difficult to eliminate the click noise completely because of its randomness. Even if the click noise were eliminated completely, the performance of the Gaussian-assumption SISO decoder would still not be good, because the approximately Gaussian noise component in Rice's model is essentially non-Gaussian.
In this letter, the noise in the signal at the output of the L/D is not regarded as the sum of an approximately Gaussian noise and the click noise, as described in Rice's model, but is regarded as a kind of non-Gaussian noise as a whole. A Gaussianization approach is proposed to convert the probability distribution of the non-Gaussian noise to be closer to Gaussian, so that the likelihood metrics obtained by the SISO decoders in the digital PCM/FM system with L/D become more accurate and the BER performance of the system is improved.

The remainder of this letter is organized as follows. Section II gives a brief review of the coded digital PCM/FM telemetry system with L/D. In section III, the Gaussian mixture density (GMD) model and the expectation-maximization (EM) algorithm, as well as the Gaussianizing filter used in the Gaussianization scheme, are introduced. Section IV presents the proposed Gaussianization approach in the coded digital PCM/FM telemetry system with L/D. Section V gives the simulation results of the proposed approach when TPC and LDPC codes are employed as FEC codes. Finally, conclusions are drawn in section VI.

2. Review of the coded digital PCM/FM system

In this section, the coded digital PCM/FM telemetry system with L/D is reviewed. The model of the considered system is shown in Figure 1.

Figure 1: Model of the PCM/FM telemetry system

In the coded digital PCM/FM telemetry system, the telemetry data is first encoded by a PCM encoder and then by a FEC encoder. The FEC encoder can be an encoder of RS codes, TPC codes or LDPC codes. The pre-filter in this system is used to eliminate the inter-symbol interference and improve the bandwidth efficiency. The L/D is adopted as the demodulator, which is followed by a FEC SISO decoder. The modulated FM signal S_FM(t) is:

S_FM(t) = A·cos(ω_c·t + K_FM ∫ f(t) dt)    [1]

where f(t) is the modulating signal, which is the output of the pre-filter, ω_c is the carrier frequency, A is the carrier amplitude, and K_FM is the frequency deviation constant, which is decided by the hardware circuit. The structure of the L/D is depicted in Figure 2.

Figure 2: Structure of the limiter/discriminator, from its input S_i(t) through the differentiator output S_d(t) to the envelope detector output S_o(t)

The L/D is composed of a differentiator and an envelope detector. The input signal of the differentiator is S_FM(t) in equation [1], and the output of the differentiator S_d(t) is:

S_d(t) = -A·(ω_c + K_FM·f(t))·sin(ω_c·t + K_FM ∫ f(t) dt)    [2]

From equation [2], it is seen that S_d(t) is the differentiation of S_FM(t) and that the envelope of S_d(t) varies linearly with f(t). The output signal of the envelope detector, excluding the direct current, is:

S_o(t) = K_d·K_FM·f(t)    [3]

where K_d is a constant decided by the L/D circuit. From equation [3], it is known that f(t) can be recovered.
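Equations [1] to [3] can be reproduced numerically with a generic discrete-time discriminator (a sketch of the principle, not of the hardware L/D): in the complex baseband representation, differentiating the unwrapped instantaneous phase recovers a signal proportional to f(t), and removing the mean plays the role of the DC suppression of equation [3].

    import numpy as np

    def fm_discriminate(z: np.ndarray, fs: float) -> np.ndarray:
        """Recover the modulating signal from complex baseband FM samples."""
        phase = np.unwrap(np.angle(z))                  # instantaneous phase
        inst_freq = np.diff(phase) * fs / (2 * np.pi)   # instantaneous frequency, Hz
        return inst_freq - inst_freq.mean()             # remove the direct current

    # Example: 1 kHz sine modulation, 50 kHz deviation (assumed), fs = 1 MHz
    fs, kfm = 1e6, 2 * np.pi * 50e3
    t = np.arange(10000) / fs
    f_mod = np.sin(2 * np.pi * 1e3 * t)
    z = np.exp(1j * kfm * np.cumsum(f_mod) / fs)        # noiseless FM baseband
    f_hat = fm_discriminate(z, fs)                      # proportional to f_mod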
3. Gaussianization approach

The Gaussianization approach is an important technique in many signal processing and detection areas, such as non-Gaussian autoregressive processes [7] and speech processing. In these areas, the noise background is usually assumed to be Gaussian for the sake of simplicity. However, a wide variety of signal probability distributions are non-Gaussian. The mismatch between the assumption and the actual distribution results in a poor performance of the matched filter, correlation test, maximum likelihood decoding, etc. However, by exploiting the statistical characteristics of the non-Gaussian background, its probability distribution can be converted to be more "Gaussian-like". In other words, the amplitudes of the background noise are adjusted so that the probability distribution becomes more similar to the Gaussian distribution than before. Then the performance can be improved.

The procedure of a typical Gaussianization approach is as follows. First, the probability distribution function of the non-Gaussian background is fitted by a non-Gaussian probability model [7]. Then, the parameters of the non-Gaussian probability model are determined by a parameter estimation algorithm [8]. Finally, according to the estimated parameters, the amplitudes of the non-Gaussian background are adjusted by a Gaussianization processing module so that the probability distribution is Gaussianized.

There are many non-Gaussian probability models, such as the Gaussian mixture density (GMD) model, the class-A model, the K-distribution model and so on [7]. The GMD model, which is an effective model for fitting a variety of non-Gaussian probability distributions, is adopted in this letter:

f(s) = Σ_{i=1}^{M} λ_i·f_i(s),   with Σ_{i=1}^{M} λ_i = 1    [4]

where s is the signal with non-Gaussian background noise, f(s) is the GMD model of s, the f_i(s) are Gaussian probability distribution functions with different means μ_i and variances σ_i², M is the order of the GMD model, namely the number of f_i(s), and λ_i is the mixture parameter which denotes the relative weighting of each f_i(s). M corresponds to the statistical characteristics of s: the bigger M is, the more accurate the GMD model is. However, a big M means a high computational cost. Therefore, the value of M is usually a trade-off between the computational cost and the accuracy in practice. In this letter, M is set to 2, which is the simplest case. Therefore, the GMD model is a 2-order one:

f(s) = λ·f_B(s | μ_B, σ_B) + (1 − λ)·f_I(s | μ_I, σ_I)    [5]

where f_B(s) is a Gaussian probability distribution function with mean μ_B and variance σ_B², and f_I(s) is a Gaussian probability distribution function with mean μ_I and variance σ_I². Compared with Rice's click model, f_B(s) describes the random property of the non-Gaussian background noise, while f_I(s) describes its impulsive property.

Obviously, the parameter group g = [λ, μ, σ] of the GMD model should be determined from the statistical characteristics of s. There are many estimation approaches that can be used to get the parameter group g, for example the expectation-maximization (EM) algorithm [8], the penalized maximum likelihood estimation algorithm, the indirect least squares estimation algorithm for the cumulant generating function, and so on [9]. As a widely used and highly efficient algorithm, the EM algorithm is adopted as the parameter estimation algorithm in this letter. The EM algorithm is an iterative algorithm which needs initial values for the parameter group g. The initial value of g can be set by experience or according to the statistical characteristics of s.
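For reference, one EM iteration for the 2-order GMD model of equation [5] takes the following compact form (a textbook sketch in Python/numpy, not the authors' implementation; the initial values are passed in and can follow the heuristics given in section IV).

    import numpy as np
    from scipy.stats import norm

    def em_gmd2(s, mu, sigma, lam, n_iter=50):
        """EM estimation of the 2-order GMD parameter group g = [lam, mu, sigma]."""
        s = np.asarray(s, dtype=float)
        mu, sigma, lam = (np.asarray(v, dtype=float) for v in (mu, sigma, lam))
        for _ in range(n_iter):
            # E-step: posterior responsibility of each component for each sample
            pdf = lam * norm.pdf(s[:, None], mu, sigma)      # shape (N, 2)
            resp = pdf / pdf.sum(axis=1, keepdims=True)
            # M-step: re-estimate weights, means and standard deviations
            nk = resp.sum(axis=0)
            lam = nk / len(s)
            mu = (resp * s[:, None]).sum(axis=0) / nk
            sigma = np.sqrt((resp * (s[:, None] - mu) ** 2).sum(axis=0) / nk)
        return lam, mu, sigma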
The Gaussianization processing module in this letter is the so-called Gaussianizing filter proposed in [9]. The function of a Gaussianizing filter is to adjust the amplitudes of the input signal according to the estimated parameters obtained by the estimation algorithm, namely to strengthen the smaller ones and weaken the bigger ones, so that the amplitude distribution of the output signal becomes closer to Gaussian. Two Gaussianizing filters, the U-filter and the G-filter, are mentioned in [9]. The Gaussianizing filter proposed in this letter is a revised version of the G-filter, which is discussed in detail in the next section.

It should be noticed that, after the Gaussianization approach, the information in the original signal should be retained, otherwise the performance remains poor. In this letter, the similarity between the signals before and after the Gaussianization approach is evaluated by the traditional correlation coefficient, which is defined as follows:

r_xy = Σ_{i=1}^{N} (x_i − x_m)(y_i − y_m) / sqrt( Σ_{i=1}^{N} (x_i − x_m)² · Σ_{i=1}^{N} (y_i − y_m)² )    [6]

where x and y denote the signals before and after the Gaussianization approach, and x_m and y_m are their respective mean values. The value of r_xy is in the range [−1, +1]. The larger r_xy is, the more similar the two signals are. When r_xy is 1, the two signals are exactly the same. When r_xy is 0, the two signals are uncorrelated. From this point of view, if r_xy is high, it can be considered that most of the information is kept after the Gaussianization approach.

4. The proposed Gaussianization scheme

The traditional Gaussianization approaches mentioned in the previous section have so far all been applied in signal processing and detection areas. To the best of our knowledge, the idea of the Gaussianization approach has not been applied to SISO decoding algorithms. In this letter, a Gaussianization approach is proposed to improve the BER performance of the SISO decoder in the coded digital PCM/FM telemetry system with L/D.

The idea is based on the following fact. Because of the discriminator in the L/D, the noise in the demodulated signal at the output of the L/D is non-Gaussian, even if the noise in the channel is additive white Gaussian noise. However, the likelihood metrics in the traditional SISO decoding algorithms, such as the Chase decoding algorithm, the belief propagation decoding algorithm, etc., all adopt the Gaussian background assumption. The mismatch between the actual distribution of the likelihood metrics and the Gaussian assumption causes inaccurate likelihood metrics, which results in a poor decoding BER performance. For the sake of clarity, in the following, the demodulated signals refer to the output signals of the L/D, which are the sum of the useful signal and the non-Gaussian noise. Since it can adjust the amplitudes of the demodulated signals, the Gaussianization approach is adopted so that the distribution of the demodulated signals becomes closer to Gaussian, resulting in more accurate likelihood metrics and a better SISO decoding BER performance. Figure 3 shows the proposed Gaussianization module in the receiver of the coded digital PCM/FM telemetry system.

Figure 3: Block diagram of the Gaussianization module in the coded digital PCM/FM receiver

In the proposed Gaussianization module, the probability distribution of the demodulated signals is approximated by the 2-order GMD model in [5]. The EM algorithm is adopted to estimate the parameter group g = [λ, μ, σ] of the 2-order GMD model. The initial setting of g is crucial to the final results of the EM algorithm: the algorithm will converge to a local maximum if the initial setting is inappropriate.
In experiments, the following initial setting is appropriate: the initial means μ_1 and μ_2 are set to the mean of the input signals of the Gaussianization module and to the amplitude of the baseband data, respectively; the initial variances σ_1 and σ_2 are set to the variance of the input signals of the Gaussianization module and to 1; the initial mixture parameters λ are initialized to 0.5 and 0.5. After a fixed number of iterations (50), or when terminated by the stopping criteria, the EM algorithm gives the estimated parameter group g.

The Gaussianizing filter proposed in this letter is the normalized G-filter (NG-filter):

f_NG(s | g') = Φ⁻¹( Σ_{i=1}^{M} λ'_i·Φ((s − μ'_i)/σ'_i) ) / Φ⁻¹( Σ_{i=1}^{M} λ'_i·Φ((m − μ'_i)/σ'_i) )    [7]

where f_NG(s | g') are the outputs of the NG-filter, Φ(x) is the standard Gaussian cumulative distribution function, Φ⁻¹(x) is its inverse function, g' = [λ', μ', σ'] are the estimated 2-order GMD model parameters obtained by the EM algorithm, s are the input signals of the Gaussianization module and m is the amplitude of the baseband data. Compared with the G-filter in [9], a normalization term is added as the denominator. Because of this normalization term, the outputs of the NG-filter become representations of the magnitude relative to the amplitude of the baseband data. Therefore, these outputs can be fed into the SISO decoder as more accurate likelihood metrics.

Almost all the computational cost of the proposed Gaussianization approach is concentrated in the EM algorithm: the bigger the number of iterations, the larger the computational cost. In fact, a moderate number of iterations, for example the 50 iterations adopted in this letter, is enough to produce accurate estimations. In a practical telemetry system, the Gaussianization module can be realized in hardware or software. It can be an optional module for performance enhancement, placed between the L/D and the SISO decoder.
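A direct transcription of the NG-filter of equation [7], based on the reconstruction given above, reads as follows (illustrative Python; Φ and Φ⁻¹ are taken from scipy, and the clipping guards the quantile function at the extremes):

    import numpy as np
    from scipy.stats import norm

    def ng_filter(s, lam, mu, sigma, m):
        """Normalized G-filter: Gaussianize samples s given GMD parameters g'."""
        def g(x):
            x = np.atleast_1d(np.asarray(x, dtype=float))
            u = np.sum(lam * norm.cdf((x[:, None] - mu) / sigma), axis=1)
            return norm.ppf(np.clip(u, 1e-12, 1.0 - 1e-12))
        return g(s) / g(m)      # outputs are relative to the baseband amplitude m

These normalized outputs then replace the raw L/D samples as soft inputs of the Chase or belief-propagation decoder.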
5. Simulation results

As mentioned in section III, the correlation coefficient is used to characterize the similarity of the signals before and after the Gaussianization module. In simulation, the average value of the correlation coefficient is 0.9879, which means that almost all the information is kept while the amplitudes have been adjusted.

Simulation results are provided in this section to show the improved decoding BER performance of the proposed Gaussianization approach. The FEC codes adopted in this letter are TPC and LDPC codes. The adopted TPC codes are TPC(64, 57)² and TPC(32, 26)², which have extended Hamming codes as their component codes [10]. Both kinds of TPC codes have been chosen as FEC codes in PCM/FM telemetry systems [3]. The adopted LDPC code is the (8160, 7136) code, which is an FEC code suggested by the Consultative Committee for Space Data Systems (CCSDS) [11]. The simulation system is built in MATLAB. The carrier frequency of the FM is 80 MHz, and the baseband data rate is 10 Mbps. The maximum frequency deviation coefficient is 0.35.

The BER performance comparison in the TPC coded PCM/FM system with L/D, as well as the performance of the uncoded digital PCM/FM with L/D, is presented in Figure 4.

Figure 4: Comparison of the BER performance with and without the Gaussianization approach in TPC coded PCM/FM with limiter/discriminator: (a) TPC(64, 57)², (b) TPC(32, 26)²

The SISO decoding algorithm of both TPC codes in Figure 4 is the Chase II algorithm with 8 iterations [10]. At the 10^-4 BER level, the needed Eb/N0 of the TPC without the Gaussianization approach is about 9.8 dB (TPC(64, 57)²) and 10 dB (TPC(32, 26)²), while that of the TPC with the Gaussianization approach is 9 dB (TPC(64, 57)²) and 9.5 dB (TPC(32, 26)²), which yields 0.8 dB and 0.5 dB of coding gain respectively.

The BER performance comparison of the LDPC coded PCM/FM system with and without the Gaussianization approach is presented in Figure 5. The SISO decoding algorithm of the LDPC code is the min-sum algorithm, with α = 1.25 and β = 0. The number of iterations is 50, and the quantization mode is 1-3-4 [11]. At the 10^-4 BER level, the needed Eb/N0 of the LDPC without the Gaussianization approach is about 9.6 dB, while that of the LDPC with the Gaussianization approach is 9.2 dB, which yields 0.4 dB of coding gain.

Figure 5: Comparison of the BER performance with and without the Gaussianization approach in LDPC coded PCM/FM with limiter/discriminator

6. Conclusion

In this work, a novel Gaussianization approach is proposed to improve the BER performance of the SISO decoder in the digital PCM/FM telemetry system with L/D. The simulation results show that a coding gain of about 0.8 dB at the 10^-4 BER level is achieved when the employed FEC code is a TPC. The proposed approach can easily be extended to digital PCM/FM telemetry systems with other kinds of FEC codes employing SISO decoding algorithms, such as convolutional codes and turbo codes; this is our future work.

7. References

[1] R. F. Pawula, "Improved Performance of Coded Digital FM", IEEE Transactions on Communications, Vol. 47, No. 11, pp. 1701–1708, 1999.
[2] D. Taggart, R. Kumar, N. Wagner, Y. Krikorian, C. Wang, N. Elyashar, M. Cutler, C. Stevens, "PCM/FM performance enhancement using Reed Solomon channel coding", IEEE Aerospace Conference Proceedings, pp. 1337–1346, 2003.
[3] M. Geoghegan, "Experimental results for PCM/FM, Tier 1 SOQPSK and Tier 2 multi-h CPM with turbo-product codes", Proc. Int. Telemetry Conf., Las Vegas, NV, 2003.
[4] L. Wang, G. Chen, "Using LDPC Codes to Enhance the Performance of FM-DCSK", 47th IEEE International Midwest Symposium on Circuits and Systems, pp. I-401–I-404, 2004.
[5] S. O. Rice, "Noise in FM receivers", Time Series Analysis, M. Rosenblatt, Ed., New York, Wiley, pp. 395–422, 1963.
[6] L. Kouwenhoven, M. Verhoeven, A. van Roermund, "A new simple design model for FM demodulators using soft-limiters for click noise suppression", IEEE International Symposium on Circuits and Systems, pp. 265–268, 1997.
[7] Y. Zhao, X. Zhuang, S.-J. Ting, "Gaussian mixture density modelling of non-Gaussian source for autoregressive process", IEEE Transactions on Signal Processing, Vol. 43, No. 4, pp. 894–903, 1995.
[8] S. M. Verbout, J. M. Ludwig, A. V. Oppenheim, "Parameter estimation for autoregressive Gaussian-mixture processes: the EMAX algorithm", IEEE Transactions on Signal Processing, Vol. 46, No. 10, pp. 2744–2756, 1998.
[9] Wang Pingbo, Cai Zhiming, Liu Feng, Tang Suofu, "G-Filter's Gaussianization function for interference background", 2010 International Conference on Signal Acquisition and Processing, pp. 76–79, 2010.
[10] R. M. Pyndiah, "Near-optimum decoding of product codes: block turbo codes", IEEE Transactions on Communications, Vol. 46, No. 8, pp. 1003–1010, 1998.
[11] Low Density Parity Check Codes for Use in Near-Earth and Deep Space Applications, CCSDS 131.1-O-2, Orange Book, September 2007.


Real time C Band Link Budget Model Calculation - Francisco-M-Fernandez – Airbus Space and Defence - Spain

The purpose of this paper is to show the integration of the transmission gain values of a telemetry transmission antenna, according to its relative position, into the C-band link budget, in order to obtain an accurate vision of the link. Once our C-band link budget model was fully implemented and ready to work in real time with several values received from the aircraft (GPS position, roll, pitch and yaw) and from the ground system (azimuth and elevation of the receiving telemetry antenna), it became necessary to replace the constant transmitter antenna gain with values estimated more accurately from the relative beam angles between the transmitting and receiving antennas. Keeping in mind that an aircraft is not a static telecommunication system, a real-time value of the transmission gain is necessary. In this paper, we show how to perform a real-time C-band link budget.
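The core of such a real-time computation can be sketched in a few lines (a generic illustration, not the Airbus implementation: the 5.1 GHz carrier, the pattern lookup and all names are assumptions). At each telemetry update, the transmitter gain is read from a measured antenna pattern indexed by the relative azimuth and elevation of the ground station in airframe axes, instead of being held constant.

    import numpy as np

    def fspl_db(d_m: float, f_hz: float) -> float:
        """Free-space path loss in dB."""
        return 20 * np.log10(4 * np.pi * d_m * f_hz / 299_792_458.0)

    def received_power_dbm(p_tx_dbm, g_tx_db, g_rx_db, d_m, f_hz, losses_db=0.0):
        """Real-time link budget with an attitude-dependent transmit gain."""
        return p_tx_dbm + g_tx_db + g_rx_db - fspl_db(d_m, f_hz) - losses_db

    f_c = 5.1e9                   # assumed C-band telemetry carrier
    g_tx = -2.0                   # dBi, looked up in pattern[az_rel, el_rel]
    print(received_power_dbm(33.0, g_tx, 35.0, 80e3, f_c))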


ETTC 2015 – European Test & Telemetry Conference

Rosetta-Philae RF link, from separation to hibernation

C. Dudal, C. Loisel, E. Robert
CNES, 18 avenue Edouard Belin, 31401 Toulouse Cedex 9, France
Emails: clement.dudal@cnes.fr, celine.loisel@cnes.fr, emmanuel.robert@cnes.fr

M.A. Fernandez, Y. Richard, G. Guillois
Syrlinks, rue des Courtillons, ZAC de Cicé-Blossac, 35170 Bruz, France
Emails: miguel.fernandez@syrlinks.com, yves.richard@syrlinks.com, gwenael.guillois@syrlinks.com

Abstract: The Rosetta spacecraft reached the vicinity of the comet 67P/Churyumov-Gerasimenko in 2014 and released the lander Philae for an in-situ analysis through ten scientific instruments. The analysis of the lander RF link telemetry reveals major information on the lander behaviour and environment during the 50-hour mission on the comet.

Keywords: Radiofrequency, received power, multipath, interferences, Philae, comet

1. Introduction

The ESA/CNES/DLR Rosetta spacecraft was launched in March 2004 with the objective of reaching the comet 67P/Churyumov-Gerasimenko 10 years later. One of its main assignments was to carry out in-situ analysis using Philae, a small lander of about 100 kg equipped with scientific instruments. The S-band RF link between Rosetta and Philae was, after separation, the only means of communication with the lander. This paper proposes an analysis of the RF link telemetry during the Separation, Descent and Landing phase (SDL) and during the First Science Sequence after landing (FSS). As the comet landing was eventful, the descent and the landing are studied in two different parts. A cross-comparison of our analysis is made with those of other scientific teams to strengthen the raised conclusions.

2. Rosetta - Philae RF link overview

The transceiver is a full-duplex S-band transmission set for digital data, developed specifically for space applications. The design by Syrlinks was driven by drastic mass and power consumption objectives. For this, the use of commercial parts was decided, leading to a low-cost product widely used afterwards on the Myriade platform family. The transceiver is composed of a transmitter, a receiver and a reception filter for dual antenna use (Figure 1). The filter protects the receiver from out-of-band signals, particularly from the transmitter. The two functions (receiver and transmitter) are fully independent and can be activated separately. Technical details are given in Table 1 and an illustration in Figure 2.

Table 1: Transceiver technical details

Mass | 950 g
Volume | 160 mm x 120 mm x 40 mm
Power consumption (28 V power bus) | 1.7 W Rx only; 6.5 W Rx/Tx at 20°C (1 W RF output power)
Temperature | Operational: -40°C to +50°C
Radiation | 10 krad (cumulated dose)
Frequency | Telecommand link: 2033.2 MHz; Telemetry link: 2208 MHz
Modulation | QPSK
Data filtering | Differential coding; Nyquist half raised cosine filtering (roll-off 0.35 in Rx, 1 in Tx)
Data rate | Telecommand link: 16384 bps; Telemetry link: 16000 bps
Rx sensitivity range | -50 / -120 dBm
Channel coding | Tx: convolutional coding (L = 7, R = 1/2); Rx: Viterbi soft decision decoding
Electrical interfaces | RS 485 and CMOS

There are two transceivers on each side of the RF link. The redundancy is activated with RF switches on the orbiter side (1 Tx/1 Rx active) and with a diplexer on the lander side (1 Tx/2 Rx active).

Figure 1: Rosetta-Philae bidirectional RF link

The choice of implementing identical RF chains for transmission and reception on the orbiter and the lander has given great advantages, such as cutting procurement costs and simplifying qualification, integration and testing. With 1 W RF output power and 1 dBi gain (@ 60°) patch antennas, link establishment is possible for distances up to 150 km.
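The 150 km figure can be checked quickly from the numbers of Table 1 (1 W output, 1 dBi antennas on both sides, 2208 MHz telemetry carrier, -120 dBm sensitivity floor); the following lines are a back-of-the-envelope sketch, ignoring implementation and pointing losses:

    import numpy as np

    def fspl_db(d_m: float, f_hz: float) -> float:
        """Free-space path loss in dB."""
        return 20 * np.log10(4 * np.pi * d_m * f_hz / 299_792_458.0)

    p_rx = 30.0 + 2 * 1.0 - fspl_db(150e3, 2208e6)      # Tx power + both antenna gains
    print("received power at 150 km: %.1f dBm" % p_rx)  # about -111 dBm

This leaves roughly 9 dB of margin above the -120 dBm sensitivity floor, consistent with the stated operating range.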
The lander telecommunication system answers a request-to-send protocol from the orbiter at any time. This handshake protocol, which implies full-duplex equipment and which was specifically designed for the Rosetta mission, ensures the desired transmission quality even when the relative geometry and visibility between the orbiter and the lander are not favourable.

Figure 2: Rosetta ISL transceiver

In the housekeeping telemetry available on the orbiter side, one parameter is particularly interesting to get information beyond its intrinsic value: the Received Signal Strength Indicator (RSSI). From this raw telemetry value, it is possible to extract the received power level on the orbiter side, which can then be processed as shown in the following parts.

3. Separation and Descent

Before separation, several milestones had to be respected in order to deliver the lander in optimal conditions. Despite some late complications with the battery heating, the separation of the lander occurred nominally at 8h35 UTC. Before being able to establish the RF link, the Rosetta spacecraft had to maneuver to point its Inter Satellite Link (ISL) antennas toward the lander, leading to an AOS roughly 2 hours after separation. The link was established within 5 min, as expected, and lasted for the 5 remaining hours of the descent. It allowed the transmission of the CIVA (Comet Infrared and Visible Analyzer) and ROLIS (Rosetta Lander Imaging System) photographs taken before touchdown. As expected with the increasing distance between the lander and the orbiter, the RSSI decreases over time (Figure 3).

Figure 3: RSSI level during the descent toward the comet

Independently of the globally decreasing level, low-frequency oscillations with an average duration of 59 min are noticeable on the RSSI (orange lines in Figure 3). They could be explained by a multipath effect on the orbiter structure.

4. Landing

The touchdown on the chosen landing site Agilkia was expected at 15h34m10 UTC and occurred with a precision of a few seconds. After touchdown, despite the announced activation success of the anchoring system, the received RF link telemetry indicated multiple and regular interruptions during two hours. The investigation carried out helped to establish the failure of the anchoring system, causing Philae to rebound on the comet surface. There were two rebounds before stabilization on the ground; they are studied from an RF point of view below.

First rebound

After the first touchdown (TD1), the lander rebounded and moved away from the landing site. According to other instrument teams, it tumbled around the 3 axes, keeping a relatively stable attitude around the Z-axis. The rebound lasted roughly 2 hours. During this period, the RF link suffered multiple and periodic interruptions. The estimated RSSI on the telemetry link presents high and fast variations in the range -80 dBm/-120 dBm, reaching the limit of the Rx sensitivity range and leading to those link interruptions.

Figure 4: RSSI level during the first rebound

At 16h20 UTC, 45 min after touchdown, a change is observed in the RSSI profile (Figure 4).
The maximum measured value is roughly twice as high. The lander motion is directly responsible for this change, particularly the spin around the Z-axis created during the separation. An analysis in the frequency domain provides more information on any periodic phenomenon, such as a spin motion, and on its value. The orbiter telemetry is sampled at Fs = 0.1 Hz (1 sample every 10 s). According to the Shannon theorem, the maximum frequency of a properly sampled phenomenon is 0.05 Hz (period of 20 s). For faster phenomena, aliasing occurs and the frequency of the phenomenon must be interpreted according to the sampling frequency Fs: for a periodic phenomenon at a frequency F higher than Fs/2, the observed frequency will be Fo = Fs − F.

The Fourier transform of the RSSI (averaged periodogram with Nfft = 128) is computed and analyzed over the period of the first rebound (Figure 5).
• Before 16h20, a periodic phenomenon is identified at a frequency of 0.0194 Hz (period = 51.5 s). If this measure actually corresponds to a faster phenomenon and is biased by aliasing, then, taking into account the sampling frequency, the real frequency would be 0.0806 Hz (period = 12.41 s).
• After 16h20, a periodic phenomenon at a frequency of 0.044 Hz (period = 22.7 s) is identified. If this measure actually corresponds to a faster phenomenon and is biased by aliasing, then, taking into account the sampling frequency, the real frequency would be 0.056 Hz (period = 17.86 s).

Figure 5: RSSI spectrum during the first rebound, before and after 16h20

In order to solve the ambiguity due to the aliasing effect, a cross-comparison is made with the ROMAP (Rosetta Magnetometer and Plasma-monitor) team analysis. According to them, at 16h20, the lander may have collided with a surface feature, which had the effect of slowing the lander spin around its Z-axis, changing from 13 s per rotation to 24 s per rotation. It probably also reduced the lander tumbling, decreasing the dispersion around the spin axis. The lander antennas were then better pointed, leading to a better signal reception on the orbiter side.

Table 2: Comparison of ROMAP and RF analyses

Timeline | ROMAP | RF analysis
Before 16h20 | Lander spin at 13 s/rot | Phenomenon of period 12.41 s detected
After 16h20 | Lander spin at 24 s/rot | Phenomenon of period 22.7 s detected

This allows concluding that the periodic phenomenon observed before 16h20 is actually biased by aliasing and represents the lander spin at 13 s per rotation. The spin decrease at 16h20 is perfectly visible in the analysis.
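The ambiguity resolution above amounts to checking which candidate frequency, the direct reading Fo or its alias Fs − Fo, best matches the independent ROMAP spin estimate. A minimal sketch (illustrative only) reproduces the reasoning:

    FS = 0.1                        # RSSI telemetry sampling frequency (Hz)

    def candidates(f_obs):
        """Possible true frequencies for an observed peak f_obs < FS/2."""
        return (f_obs, FS - f_obs)  # direct reading and first-order alias

    for label, f_obs, romap_period in (("before 16h20", 0.0194, 13.0),
                                       ("after 16h20", 0.0440, 24.0)):
        best = min(candidates(f_obs), key=lambda f: abs(1.0 / f - romap_period))
        print("%s: retained period %.2f s (ROMAP: %.0f s/rot)"
              % (label, 1.0 / best, romap_period))

As in the analysis above, the aliased interpretation (12.41 s) is retained before 16h20, while the direct reading (22.7 s) is retained afterwards.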
The maximum measured level nevertheless shows that a high attenuation affects the link compared to previous transmissions. This might be due to the actual lander attitude on the ground and its surrounding environment. The proximity of the lander stabilization time to the end of the visibility window does not allow concluding properly on the lander situation.

Figure 7: RSSI after stabilization on the ground

However, the analysis of the lander antenna temperatures gives valuable information on the lander illumination during the rebound phase (Figure 8). The changes in illumination, particularly after stabilization on the ground, indicate that the lander is in a dark place with very low solar flux, leading to a fast decrease of the antenna temperatures.

Figure 8: Lander antenna temperatures from first touchdown up to the end of the RF visibility window

At the end of the RF visibility window, after the stabilization on the ground, two crucial questions remained unanswered: what are the final location and attitude of the lander? When will the RF link be established again to start the First Science Sequence?

5. First Science Sequence

After separation, descent and landing, the science operations were planned in two phases depending on the power supply source:

- The First Science Sequence (FSS) began at lander touchdown and ended when the primary and secondary batteries were empty. The foreseen lifetime was about 50 hours.
- The Long Term Science (LTS) will begin at the first battery recharging and last until the end of the lander mission (thermal and power supply constraints).

Due to the unexpected landing circumstances, there was an uncertainty on the RF link re-establishment. Fortunately, the orbiter detected a signal from the lander at the time predicted for the nominal landing site. Despite the observed instability at the beginning, the FSS could start. During the FSS, there were four RF visibilities. The overall duration of each visibility (including unstable and stable periods) decreased over time, but the stable duration ratio increased.

Parameter                                     | Visi 1 | Visi 2 | Visi 3 | Visi 4
Date (dd/mm)                                  | 13/11  | 13/11  | 14/11  | 14/11
Whole link duration (hh:mm)                   | 03:57  | 03:42  | 02:48  | 02:22
Stable link duration (hh:mm)                  | 02:43  | 02:36  | 02:46  | 02:09
Stability ratio (%)                           | 69%    | 71%    | 99%    | 91%
Maximum received power on orbiter side (dBm)  | -89    | -91    | -94    | -94

Table 3: RF visibility characteristics during FSS

The maximum measured power depends on the orbiter and lander attitudes and on the distance between them. The distance increased continuously during the FSS, which led to a higher free-space path loss and a lower received power. The levels are nonetheless in a nominal range considering the simulated link budget and demonstrate a nominal transmission.

The RSSI profiles during the four visibilities are given in Figure 9. They show the distribution between instability and stability periods. During the stable periods, arches are noticeable; they reflect the multipath interference due to the surrounding relief.

Figure 9: RSSI profiles during the four RF visibilities, first to last

The first and fourth visibilities present the same profile, with a decreasing peak power level during the stable link duration. Conversely, the second and third present an increasing peak power profile. The orbiter trajectories with regard to the lander position on the comet were therefore probably similar for the first and fourth, and for the second and third.
This conclusion is corroborated by the simulations made by the CNES navigation team on the probable azimuth and elevation angles of the lander antenna during the FSS.

Figure 10: Lander antenna azimuth and elevation angle simulation during FSS

6. Interference analysis

The level profiles in Figure 9 show arches. As previously explained, they are due to multipath interference. During the first visibility, the arches are discernible and allow working on a model. When a radio signal is received with a direct and a reflected component, the attenuation due to the reflection interference depends on the reflection angle θ, the distance d between the antennas and the reflecting surface, the reflection coefficient of the surface, and the wavelength of the signal.

Figure 11: Wave reflection model (direct and reflected waves; lander antenna at distance d from the surface, reflection angle θ)

As the comet is in rotation and considering the orbiter as motionless, the elevation angle θ changes over time according to the comet period (12.4 h). For two values of the distance between the antennas and the reflecting surface (40 cm and 1 m), and for a reflection coefficient taken equal to 1, Figure 12 gives the profile of the attenuation expected on the direct signal.

Figure 12: Variable attenuation model for a reflecting surface at 1 m and 0.4 m

Tuning the distance to 41.5 cm to fit the model to the measured received power yields Figure 13. The simple model used for the multipath interference does not allow making highly reliable conclusions. What can be said is that there is probably a reflecting surface situated around 40 cm from the antennas. This implies that the lander attitude is not optimal and that it is oriented toward some rocks.

Figure 13: Comparison of the attenuation model with the RSSI

The model could be sharpened if the nature of the relief were known, and by taking into account the orbiter speed relative to the comet and the lander attitude. As visible in the last three visibilities, complex interference phenomena are at stake, with multiple reflected signals combined with diffraction and possibly no direct signal.

7. Conclusion

The RF link analysis of the first mission to land a space science laboratory on a comet made it possible to understand the successive events in the particular context of the rebound landing and the unknown final position. The lander spin during the rebound phase could be determined thanks to a frequency analysis of the received power on the orbiter side. Once the lander had stabilized on the ground, the RF link could be established at each orbiter-lander visibility. A propagation model derived from the power variations offers clues on the final lander attitude and position with regard to its environment. The RF link behaved nominally when established and played its part at best despite the difficult conditions, from the separation from the orbiter to the final hibernation on 67P/Churyumov-Gerasimenko.
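As a closing illustration, the two-ray model used in the interference analysis (Section 6) can be sketched as follows. This is a simplified illustration under the paper's assumptions (perfectly reflecting surface, reflection coefficient 1); the S-band carrier value and the reflection phase convention are assumptions, not values from the paper:

```python
import numpy as np

C = 3.0e8
F_ISL = 2.2e9               # assumed S-band ISL carrier (order of magnitude only)
LAM = C / F_ISL

def direct_signal_attenuation_db(theta_rad, d, gamma=1.0):
    """Two-ray model: direct wave plus a wave reflected by a surface at
    distance d (m) from the antenna, arriving at elevation angle theta.
    The extra path length of the reflected wave is 2 * d * sin(theta).
    Negative values mean constructive reinforcement, positive values nulls."""
    phase = 2 * np.pi * (2.0 * d * np.sin(theta_rad)) / LAM
    amplitude = np.abs(1.0 + gamma * np.exp(1j * phase))
    return -20.0 * np.log10(np.maximum(amplitude, 1e-9))

# Comet rotation (12.4 h period) sweeps the elevation angle over a pass;
# compare reflector distances of 0.4 m and 1 m, as in Figure 12
theta = np.linspace(0.0, np.pi / 2, 500)
att_40cm = direct_signal_attenuation_db(theta, 0.4)
att_1m = direct_signal_attenuation_db(theta, 1.0)
```

The spacing of the interference arches scales with d: a closer reflector produces fewer, wider arches over the same angle sweep, which is why fitting the measured arch pattern constrains the reflector distance (41.5 cm in the paper).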
8. Glossary

CIVA: Comet Infrared and Visible Analyzer
FSS: First Science Sequence
LTS: Long Term Science
RF: Radio Frequency
ROLIS: Rosetta Lander Imaging System
ROMAP: Rosetta Magnetometer and Plasma-monitor
RSSI: Received Signal Strength Indicator
SDL: Separation Descent and Landing
TD: Touchdown


JASON3, a story of TT&C interference handling

Céline LOISEL, Gérard ZAOUCHE, Michel LE DU
Centre National d'Etudes Spatiales, 18 avenue Edouard Belin, 31401 Toulouse Cedex 9 (France)
Emails: celine.loisel@cnes.fr; gerard.zaouche@cnes.fr; michel.ledu@cnes.fr

Abstract: This paper describes the methodology and the results of the interference analysis that the JASON3 spacecraft has to deal with as part of the PROTEUS platform series, sharing frequencies, modulation schemes and ground network.

Keywords: Interference, S-band, PROTEUS, JASON3, Frequency, TT&C

1. Introduction

The joint CNES-NASA-NOAA-EUMETSAT JASON3 mission is based on a PROTEUS satellite platform and therefore uses its inherited TT&C S-band sub-system. Thanks to its instruments, including the POSEIDON altimeter, JASON3 is able to determine the height of the oceans and thus to contribute to weather forecasting and to climatology.

2. JASON3 characteristics

2.1 Orbit and attitude

JASON3 will orbit at an altitude of roughly 1300 km on a near-circular orbit at 66° of inclination. JASON3 attitude keeping is geocentric, with the +Zs axis (payload pointing axis) NADIR oriented and the velocity vector along +Ys.

Figure 1. JASON3 Measurement System

2.2 TT&C subsystem

As for the other PROTEUS missions, the JASON3 TT&C system is composed of two S-band transceivers in redundancy, both connected to two identical large-aperture antennae. In order to offer the largest coverage (the objective is 4π steradian), the antennae operate in opposite circular polarizations (left and right) and point in opposite directions along the Zs axis, as shown in Figure 2.

Figure 2. JASON3 attitude during nominal operations

2.3 TT&C frequencies

Three frequency couples (TM/TC) have been allocated to the PROTEUS satellites in the ITU-assigned bands: 2200 MHz to 2290 MHz for the TM link and 2025 MHz to 2110 MHz for the TC link. Each frequency couple can be used by several PROTEUS satellites, as seen in Table 1.

Satellite | TC frequency (MHz) | TM frequency (MHz) | Orbit | Altitude (km)
Calipso | 2101.71 | 2282.4 | Circular, heliosynchronous, i = 98.2° | 705
Jason2 | 2101.71 | 2282.4 | Circular, high inclination, i = 66° | 1336
Jason1 | 2040.493 | 2215.92 | Circular, high inclination, i = 66° | 1336
Jason3 | 2040.493 | 2215.92 | Circular, high inclination, i = 66° | 1336
SMOS | 2040.493 | 2215.92 | Circular, heliosynchronous, i = 98.4° | 755
Corot | 2088.878 | 2268.465 | Circular, polar | 896

Table 1. PROTEUS frequencies and orbits

2.4 S-band ground network

During nominal operations, the JASON3 ground network is composed of four earth stations:
o USG TTCET from the EUMETSAT network
o WPS, FBK and BRW stations from the NOAA network

During LEOP (Launch and Early Orbit Phase) and SHM (Safe Hold Mode) situations, the network is complemented with stations of the CNES 2 GHz network (HBK, KRU, KER, AUS).

3. JASON3, SMOS & JASON1 conflicts

3.1 Context

The JASON3 transceiver is configured with the same frequency couple as JASON1 and SMOS, leading to potential RF conflicts on both the TC and TM links. During the early phase of JASON3 development, it was already known that by the JASON3 launch horizon in 2015, JASON1 would be deorbited, cancelling all those interference cases. On the other hand, as was done for the JASON2/CALIPSO and JASON1/SMOS pairs previously, the JASON3/SMOS conflicts had to be studied. Both spacecraft have indeed the same TM/TC frequencies, the same modulation and data rates, and their transmitters are always transmitting. SMOS uses the same 2 GHz network of ground stations plus the TTCET ground stations.
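The conflict cases studied below are driven by the angular separation between the two spacecraft as seen from a shared ground station. A minimal sketch of that geometric screening (a hypothetical helper; in practice the positions would come from an orbit propagator, such as the STK runs described later):

```python
import numpy as np

def separation_angle_deg(r_station, r_sat_a, r_sat_b):
    """Angle between two satellites as seen from a ground station.
    All inputs are position vectors in the same Earth-fixed frame (km)."""
    u_a = np.asarray(r_sat_a) - np.asarray(r_station)
    u_b = np.asarray(r_sat_b) - np.asarray(r_station)
    cos_angle = np.dot(u_a, u_b) / (np.linalg.norm(u_a) * np.linalg.norm(u_b))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

# Operational rule discussed below: flag passes where the separation stays
# under 5 degrees, where ground antenna discrimination becomes too low
SEPARATION_THRESHOLD_DEG = 5.0
```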
There are thus some overlapping coverage areas during spacecraft visibility periods. The risk is emphasized during LEOP by the fact that additional ground station sites may be used.

Station | WCDA | FCDA | Barrow LEO-T | TTCET | HOMERE (KRU, HBK, KER, AUS)
Antenna diameter (m) | 13 | 13 | 5 | 3.1 | 12
θ3dB (°) | <1 | <1 | 1.76 | 3.2 | <1
Spatial isolation, C/I @ 5° off-angle (dB) | >>25 | >>25 | 20 | <20 | >>25
Conflict risk | No risk | No risk | Risk | Risk | No risk

Table 2. Ground network characteristics and interference risk

Due to their small antenna diameters and hence wider beams, the interference risk is higher over the TTCET and BRW ground stations, as seen in Table 2: the angular isolation between the satellites is too low to avoid any disturbance of the link budget. The following cases have thus emerged with a potential risk:
o interference over the TTCET stations (SMOS operations over AUS/KRN and JASON3 operations over USG);
o interference over the BRW station from the SMOS telemetry signal on the JASON3 TM link.

3.2 Methodology and results

A link budget analysis has been done for the conflict cases with a 5° separation between both spacecraft, taking into account the following points:
o ground antenna gain isolation;
o distance isolation (propagation losses);
o on-board antenna gain and polarization isolation;
o nominal or SHM/LEOP operations.

The link budgets with and without interferer have been computed for the three following sizing geometrical cases, shown in Figure 3:
o 5° angular separation at low elevation (case 1);
o 5° separation at high elevation (case 2);
o 90° separation, one satellite being at zenith, the other at low elevation (case 3).

Figure 3. Sizing geometrical cases

For nominal operations, the JASON3 attitude implies that during the whole pass duration the main polarization seen from the ground is LHCP, whereas for SMOS it may be either LHCP or RHCP (Figure 4).

Figure 4. SMOS attitude during nominal operations

On the other hand, during SHM and LEOP operations, PROTEUS spacecraft spin around the Zs axis, leading to a change of the onboard antenna seen from the ground and thus of the polarization. After analysis, the following conclusions have been issued (modes noted as nominal, Nom, or SHM for the two spacecraft):

TM link, TTCET stations, JASON3 as interferer: small Eb/No degradation when both satellites are near zenith (Nom/Nom and SHM/Nom); no RF conflict risk for the other mode combinations.
TM link, TTCET stations, SMOS as interferer: no RF conflict risk in any mode combination.
TM link, Barrow LEO-T, SMOS as interferer: no RF conflict risk, except in SHM/SHM, where the SMOS jamming signal may be enough to lock the JASON3 ground receiver.
TC link, TTCET stations, SMOS as interferer: no RF conflict risk in any mode combination.
TC link, TTCET stations, JASON3 as interferer: possible RF conflict risk at TC link establishment (case 1); no risk during the high elevation phase (case 2) or at 90° separation (case 3).

Table 3. JASON3/SMOS interference conclusions

There is no serious risk of conflict for the TM link:
o some low Eb/No degradation (1.5 dB) of the SMOS TM link by the JASON3 signal over the KRN and AUS TTCET stations (but no TM loss) may occur;
o the JASON3 TM link over BRW may be disturbed by the SMOS signal when both satellites are in SHM (a very seldom case).

Concerning the uplink, the SMOS TC link, when SMOS is in SHM mode, may be difficult to establish in the presence of the JASON3 interfering signal: the link establishment delays could be longer, but the perturbation would not have any consequence on the pass follow-up apart from a false RF lock on the interfering signal when establishing the TC link.

A simulation with the STK software was run over a one-year period (2013/07/01 - 2014/07/01) to establish the conflict statistics. The number of conflicts where SMOS and JASON3 are separated by less than 5° over the previous ground stations was computed (Table 4), considering passes with at least 8 min of visibility with elevation > 5° for JASON3 and 6 min for SMOS.

Table 4. JASON3/SMOS conflict statistics

The number of expected conflicts being statistically low, it has been recommended to filter out the passes with a delta angle of less than 5° in order to avoid any telemetry loss or any bad lock on the TC link. This kind of operational filtering is already applied to the in-flight PROTEUS spacecraft; the same instructions will thus be applied to JASON3.

4. Tandem flight with JASON2

JASON2 was launched in June 2008 and has the same mission and architecture as JASON3. During the first months of JASON3 in orbit (up to 6 months), JASON2 and JASON3 will fly in formation on the same orbit, separated from each other by 1 to 10 minutes, in order to perform instrument inter-calibration. Then they will be equi-located on the orbit. JASON3 and JASON2 have different TM/TC frequencies, but during the tandem flight they will use co-localized ground stations (50 m apart) at the USG and WPS sites. Potential RF interference cases may thus happen over these two ground stations. An isolation budget analysis has been done to conclude on the risk of interference during this tandem phase:

o SPATIAL ISOLATION:
  - ground antenna gain isolation (GSI)
  - range isolation (propagation losses) (RI)
  - on-board antenna gain isolation (OBSI)
o SPECTRAL ISOLATION:
  - Δfrequency
  - transmission mask
  - filtering

For the spatial isolation budgets, the following hypotheses have been taken into consideration:
o only the nominal mode for both spacecraft;
o same station location for the TC link;
o ITU antenna pattern mask for off-axis losses;
o JASON antenna pattern;
o JASON3 behind JASON2.

Table 5 summarizes the computed spatial isolation budget:

Worst case spatial isolation (dB) | TM | TC
Usingen (TTCET), JASON2 interfering on JASON3 | 11.7 | 12.3
Usingen (TTCET), JASON3 interfering on JASON2 | 12.3 |
Wallops (CDA), JASON2 interfering on JASON3 | 27.9 | 27.2
Wallops (CDA), JASON3 interfering on JASON2 | 28 |

Table 5. Spatial isolation budgets (dB)

Thanks to their large antennae, the worst-case spatial isolation for the CDA stations is 15 dB higher than for the TTCET stations. For the spectral isolation budget, it is necessary to look at the transmitted spectra, their occupied bandwidth and the filtering done at reception:

 | TM link | TC link
Modulation | Unfiltered QPSK | BPSK/PM, subcarrier @ 16 kHz
Coding | RS + CV (7, 1/2) | No coding
Data rate | 838.861 kb/s (with RS) | 4 kb/s
Occupied bandwidth | 99% of the power in 2Rs | 100 kHz @ -20 dBc

Table 6. JASON3 RF waveforms

Table 7 summarizes the different isolation budget items:
Table 7. JASON3/JASON2 isolation budgets

The isolation budgets are thus very high: C/I > 80 dB, even for TTCET where the spatial isolation alone would not be enough to prevent a conflict (C/I ~ 12 dB). So no interference between both spacecraft is expected, either for the USG or for the WPS site.

But if the risk of interference for the data link is very low, the tracking sub-system might still be disturbed. Indeed, as both satellites' signals may be received in the main antenna lobe, confirmation that the tracking receiver is not going to lock on the interfering signal has to be obtained. For the TTCET network stations, since they work on ephemerides and not by means of "autotrack" systems, there is no risk of the nominal antenna locking on the interfering signal. The WPS station can operate either on ephemerides or in "autotrack", but in that case, thanks to its big antenna diameter (13 m or 14 m) and thus its small 3 dB beamwidth, the spatial isolation (> 25 dB) between both signals ensures a "zero" risk of interference.

On the other hand, some RF tests done in a lab environment have confirmed that the presence of both signals (nominal and interfering) in the reception chain (LNA, down-conversion, ...) of onboard but also of ground equipment is not going to generate intermodulation products and/or saturation levels which could disturb the link quality.

5. "No TM at separation" risk

Interference mitigation being one of the JASON3 leitmotivs because of its last-in-the-family status, it had to face a last interference case on the launch pad, due to the SpaceX Falcon 9 launcher transmitter characteristics. Indeed, at separation, the JASON3 TT&C link is established for the first time with the station in visibility; only the telemetry service is provided during this first pass. As a reminder, the JASON3 TT&C frequencies are the following:
o JASON3 TC link: 2040.493 MHz
o JASON3 TM link: 2215.92 MHz

Concerning the launcher, only the second stage of the rocket is still present at separation. Thus, according to the Falcon 9 figures, two TX frequencies may potentially disturb the JASON3 TM link, or vice versa: S2TX1 and S2TX2 (Figure 5). Concerning the S2TX2 transmitter, the delta frequency with the JASON3 TM frequency being about 35 MHz and the Falcon 9 signal bandwidth being 3 MHz, the frequency isolation is big enough to avoid any interference between both signals. The potential disturbance may come from the S2TX1 Falcon 9 signal, whose frequency is about 2 MHz away from the JASON3 TM frequency. Moreover, as the Falcon 9 signal has a wide bandwidth of 3 MHz compared to the JASON3 QPSK spectrum at low rate (about 800 kHz), both signals overlap. The interference cases thus had to be investigated.

Figure 5. JASON3/Falcon 9 frequencies

Several potential anomalies could occur due to this interference context:
o bad antenna tracking;
o LNA saturation and/or intermodulation;
o receiver lock disturbance;
o noise addition in the reception chain.

5.1 Spatial isolation budget

As this potential interference case occurs at JASON3 spacecraft separation, the spatial isolation is very low. At the beginning of operations, JASON3 and the Falcon 9 are still attached; then, following the separation, they slowly move apart at a speed of about 0.4 m/s, leading to a few hundredths of a degree of separation after 5 min. The separation will happen over the HBK ground station, which uses a 12-metre antenna whose 3 dB lobe is 0.75° wide. Thus no isolation from the antenna can be expected.
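A quick order-of-magnitude check of why no antenna discrimination can be expected at separation (the slant range to HBK is an assumed round number; the 0.4 m/s separation speed and 0.75° beamwidth are from the text):

```python
import math

v_separation = 0.4        # relative speed after separation (m/s)
t = 5 * 60                # five minutes after separation (s)
slant_range = 1.5e6       # assumed slant range to HBK (m), order of magnitude

offset = v_separation * t                                  # ~120 m apart
angle_deg = math.degrees(math.atan2(offset, slant_range))  # ~0.005 deg
beamwidth_3db_deg = 0.75                                   # HBK 12 m antenna

# Both emitters remain deep inside the main lobe of the ground antenna
assert angle_deg < beamwidth_3db_deg / 2
```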
Both signals will be received in the main lobe of the antenna and thus with the same gain.

5.2 Spectral isolation budget

Both signals may be superposed, as seen in Figure 6. In-band isolation then depends on the EIRP delta between both signals; the JASON3 EIRP is between -4.2 dBW and 9 dBW depending on the antenna gain in the direction of the station. The computed Falcon 9 transmitter EIRP is 41 dBm, leading to a power delta between both signals of 2 dB to 15 dB.

Figure 6. JASON3 and Falcon 9 TM spectra

Concerning ground isolation, the in-band interfering signal being spaced from the nominal one by only 2 MHz, it is not filtered by the different input stages of the receiver. No isolation can be expected between both signals at receiver level. In order to characterize the receiver behaviour in this interference case, performances have been measured with a lab test setup. Both the JASON3 and Falcon 9 signals are generated with lab means, and the power delta between both signals is set between 2 and 15 dB. Tests are done at several frequency shifts between both signals in order to cope with the uncertainty in the interfering spectrum bandwidth. During these tests we observed different kinds of interference issues on the telemetry link:
o unlock of the JASON3 TM signal as soon as the interfering signal is received;
o degradation of the TM link performances (Eb/No, BER) in the presence of the interfering signal (Figure 7);
o impossibility to lock the receiver on the nominal signal in the presence of the interfering signal;
o lock of the nominal JASON3 receiver on a wrong frequency, eventually leading to no frame synchronization.

Figure 7. Eb/No degradation on the JASON3 TM link

The severity of the disturbance depends principally on the position of the JASON3 QPSK signal within the interfering signal bandwidth. In the theoretical case, the JASON3 signal sits in a low-amplitude lobe of the interfering signal, a case which is not so unfavourable. But due to the uncertainty in the Falcon 9 spectrum (filtering, exact bandwidth), the JASON3 signal may be more or less disturbed by the interferer. Following these tests, the risk of interference between the Falcon 9 telemetry signal from S2TX1 and the JASON3 transmitter was established. There was thus a non-null probability of having no spacecraft telemetry at separation during the first pass over HBK. The contingency case "no TM at separation" would in that case have to be addressed with specific operational procedures to check the satellite status. Of course this kind of procedure has to be avoided if possible, because it comes with a lot of stressful operations. It was thus decided to ask SpaceX whether they could change their transmitter frequencies to cancel any risk. Based on the previous analysis and conclusions, SpaceX offered to swap the first and second stage transmitters, leading to a zero risk of interference at separation.

6. Conclusion

The analysis conducted in this paper shows that spectral bandwidth sharing is now a challenge in the S-band, especially within a spacecraft family where frequencies, rates, modulations and ground networks are shared. JASON3, as part and last of the PROTEUS family, has not escaped multiple interference studies and experiments. Fortunately, the documented cases are rare and can be handled with simple operational rules such as pass filtering according to spacecraft preference.
Proceeding this way ensures a risk reduction with the family's spacecraft, but unfortunately not with the hundreds of other satellites sharing the same S-band frequency slot. S-band frequencies may also be shared with launcher transmitters; in this case the coordination is crucial, because operations at separation and during the first pass are really critical. A few minutes of lost telemetry may lead to tough recovery operations and thus stress for the teams. Fortunately, in the JASON3/Falcon 9 case it was easy to find an alternative solution to avoid any risk. This shows the importance of analyses and coordination like the ones presented in this paper in order to keep using the vital TT&C S-band spectrum resource for a long time. The JASON3 launch is foreseen at the end of July 2015; the first 6 months of in-orbit life will allow us to confirm the status of the operations and the interference risk management.

7. Glossary

TT&C: Telemetry, Tracking and Control
TM: Telemetry
TC: Telecommand
LHCP: Left Hand Circular Polarization
RHCP: Right Hand Circular Polarization
RF: Radio Frequency
TTCET: Telemetry TeleCommand Earth Terminal
CDA: Command and Data Acquisition Station
AUS: Aussaguel, France
BRW: Barrow, Alaska
FBK: Fairbanks, Alaska
HBK: Hartebeesthoek, South Africa
KER: Kerguelen, French Southern and Antarctic Lands
KRN: Kiruna, Sweden
KRU: Kourou, French Guiana
USG: Usingen, Germany
WPS: Wallops, Virginia
STK: System Tool Kit
EIRP: Equivalent Isotropic Radiated Power
BER: Bit Error Rate
LNA: Low Noise Amplifier


Wavelet and source coding on Ariane 5 telemetry data

Didier SCHOTT
Astrium Space Transportation, 66 route de Verneuil, 78133 Les Mureaux Cedex, France

Abstract: At ETTC 2013 and ETC 2014 we presented our work on the use of source coding techniques for 1553 and measurement data. This paper now presents another of these studies: the combination of wavelet and source coding techniques on measurement data. The paper is in three parts. First we present the wavelets we shall use with Ariane 5 data. In a second step we describe the principal existing source coding techniques. In the third part we present the different processing combinations we have tested and compare their efficiency.

Keywords: Ariane 5, Telemetry, Source Coding, Wavelet

1. Introduction

The data rate of the Ariane 5 main telemetry system is historically limited to 1 Mbit/s. Within the framework of a CNES Launcher R&T project, we have explored several tracks in order to increase the volume of data available on the telemetry link. This paper presents one of these studies: the combined use of wavelet and source coding techniques on telemetry data.

2. Source coding for Ariane 5

2.1 Ariane 5 telemetry system

The telemetry system is not only used during validation phases (qualification flights), but also for "commercial" flights. The system acquires the information of about 600 analogue sensors and 230 statuses, and monitors the two avionic 1553 functional buses. It also manages an on-board memory which is used to record data for later transmission, for example when the launcher is not in the line of sight of a ground station. The data are multiplexed by a "Central Telemetry Unit" and sent to the ground in a CCSDS frame format structure. The telemetry system is the only link we have with the launcher! Its function is to give information to the ground, in real time and for post-flight exploitation, about:
- the launcher behaviour,
- the mechanical / thermal / ... environment of the flight,
- potential anomalies and their localisation,
- the trajectory, to predict payload orbits.

Qualification flights are equipped with additional instrumentation (about 600 sensors). These additional data permit the updating of the calculation models and the validation of the dimensioning assumptions.

2.2 Source coding need

At the beginning of the flight, the propulsion phase in the atmosphere generates a lot of vibration effects, meaning a high volume of data to be transmitted to the ground. As the distance grows, this data rate has to be reduced in order to guarantee the link budget. In some phases the launcher is not in visibility of the ground stations, and the data are recorded to be transmitted later. We see there the potential advantages of source coding:
- increasing the quantity of data that can be transmitted within the limited rate,
- improving the link budget by reducing the telemetry data rate,
- limiting the on-board memory size and speeding up the restitution of recorded information.

2.3 Source coding main requirements

The quality of the data is essential. All source coding techniques which deteriorate data ("lossy" techniques) are excluded.

- Req 1: lossless algorithms

The coded data will have to be transmitted in autonomous packets (the content of a packet shall not depend on information in a previous packet). They shall contain all information mandatory to rebuild the original data and their time-tags, with the same accuracy as non-coded parameters.
- Req 2: autonomous packets
- Req 3: same accuracy for time-tagging

3. Wavelet trade-off

3.1 Why use wavelets?

In our previous studies ([1], [2]), the principle we followed was to accumulate data during a limited time, then apply a source coding algorithm to this block of data. This principle introduces additional transmission delays due to the corresponding on-board computing. The purpose of this new study was to explore other source coding techniques: transmit some "rough" information rapidly, and then in a second step transmit additional details in order to reach the needed accuracy. This kind of approach is used for image compression, as in the JPEG 2000 standard ([3]), whose processing chain is: Image → Pre-Processing → Wavelet Transformation → Source Coding → Formatting → Output.

Figure 1: JPEG 2000 processing blocks

After a first pre-processing step (essentially a colour transformation from RGB to YUV representation), the image is processed in a wavelet stage: decomposition and quantification. The quantized wavelet coefficients are then compressed with an entropic algorithm, and the coded stream is formatted to be output. The output contains different image resolutions; a low-resolution image can be obtained rapidly and additional details in further steps. It is the wavelet transformation which permits this multi-resolution decomposition. So we decided to study how to apply this principle to telemetry data.

3.2 What kind of wavelets?

The discrete wavelet transformation shall be totally reversible. Quantization is then not acceptable: we have to find wavelets that output integer coefficients. Some studies have been done in this field ([4]). We have identified two potential candidates: the Haar wavelet (simple, based on mean values and differences) and the Deslauriers-Dubuc wavelet ([5], efficient for audio signals). The JPEG 2000 standard also proposes a reversible wavelet: the LeGall wavelet. We will apply a recursive decomposition on our discrete signals, grouped in blocks of 2^n samples. At each decomposition level with m input points, we obtain m/2 wavelet coefficients ("details" Dx) and m/2 scaling factors ("rough" data Sx).

Figure 2: wavelet decomposition with 8 samples

For the calculation we will use the "lifting scheme" approach, detailed in [4]. This approach permits calculating the scaling factors and wavelet coefficients with simple linear combinations.
Example with the Haar wavelet:
- wavelet function: Dn,l = Sn-1,2l - Sn-1,2l-1
- scaling function: Sn,l = (Sn-1,2l + Sn-1,2l-1) / 2

This scaling function is transformed into an integer value by expressing it with a truncated value of Dn,l:
- Sn,l = Sn-1,2l-1 + Trunc(Dn,l / 2)

As an example, the eight input samples (185, 182, 185, 186, 187, 186, 181, 188) decompose over three levels into S3 = 185, D3 = 2, D2 = (1, -3) and D1 = (-3, 1, -1, 7).

It is also simple to reconstruct the initial data from the last level back to the first:
- Sn,2l-1 = Sn+1,l - Trunc(Dn+1,l / 2)
- Sn,2l = Sn,2l-1 + Dn+1,l

For the LeGall algorithm, the wavelet and scaling functions are defined by the equations:
- Dn,l = Sn-1,2l - Trunc((Sn-1,2l-1 + Sn-1,2l+1) / 2)
- Sn,l = Sn-1,2l-1 - Trunc((Dn,l-1 + Dn,l + 2) / 4)

And for the Deslauriers-Dubuc wavelet:
- Dn,l = Sn-1,2l - Trunc(9/16 · (Sn-1,2l-1 + Sn-1,2l+1) - 1/16 · (Sn-1,2l-3 + Sn-1,2l+3) + 0.5)
- Sn,l = Sn-1,2l-1 - Trunc(1/4 · (Dn,l-1 + Dn,l+1) + 0.5)

For the LeGall and Deslauriers-Dubuc wavelets, the first and last samples of a block cannot be calculated with the same equations and require a dedicated "border effect" management; on the same eight input samples, the three algorithms give comparable outputs.

3.3 Wavelet algorithm comparison

Information theory permits evaluating the quantity of information sent by a source. If pi is the probability of appearance of the message i, the mean quantity of information (or entropy) of the source is given by the relation:

H = - Σi pi · log(pi)

The purpose of the wavelet transformation is to diminish the signal entropy, in order to increase the efficiency of the data compression step. We will compare the entropy of the wavelet-transformed signals. For this we use the telemetry data of the beginning of the L549 flight (1/10/2009).

First step - 16-sample blocks, 1 to 4 decomposition levels. The chart hereafter gives the proportion of parameters for which the entropy is lower:

 | Lvl 1 | Lvl 2 | Lvl 3 | Lvl 4
LeGall | 20.34% | 28.14% | 35.93% | 39.66%
Haar | 15.93% | 27.80% | 34.24% | 38.31%
DD | 19.32% | 27.46% | 36.61% | 39.32%

With the 4th level of decomposition, optimal for 16-sample blocks, the entropy is lower for only ~39% of the parameters.

Second step - the block size is adapted to the decomposition level:

Level | lvl 1 | lvl 2 | lvl 3 | lvl 4 | lvl 5
Block size | 2 | 4 | 8 | 16 | 32
Number of parameters | 295 | 295 | 295 | 295 | 295
LeGall | 17.97% | 29.83% | 37.97% | 39.66% | 43.73%
Haar | 16.61% | 29.83% | 38.31% | 38.31% | 42.71%
DD | 17.97% | 28.81% | 37.97% | 39.32% | 42.03%

Level | lvl 6 | lvl 7 | lvl 8 | lvl 9 | lvl 10
Block size | 64 | 128 | 256 | 512 | 1024
Number of parameters | 295 | 141 | 141 | 120 | 105
LeGall | 44.07% | 66.67% | 61.70% | 60.83% | 51.43%
Haar | 44.07% | 69.50% | 58.87% | 57.50% | 49.52%
DD | 42.37% | 65.96% | 60.28% | 61.67% | 52.38%

Due to the sample rate, there are not enough samples at the highest decomposition levels for some parameters.
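The integer Haar lifting of Section 3.2 can be written in a few lines. The sketch below (assuming Trunc means truncation toward zero) reproduces the eight-sample example above and checks the exact reconstruction:

```python
import math

def haar_forward(samples):
    """One integer Haar lifting level: returns (scaling factors, details)."""
    approx, detail = [], []
    for i in range(0, len(samples), 2):
        d = samples[i + 1] - samples[i]                # wavelet coefficient
        approx.append(samples[i] + math.trunc(d / 2))  # truncated mean value
        detail.append(d)
    return approx, detail

def haar_inverse(approx, detail):
    """Exact reconstruction of one level from (scaling factors, details)."""
    samples = []
    for a, d in zip(approx, detail):
        first = a - math.trunc(d / 2)
        samples += [first, first + d]
    return samples

s0 = [185, 182, 185, 186, 187, 186, 181, 188]  # the paper's worked example
levels, cur = [], s0
for _ in range(3):                             # three decomposition levels
    cur, d = haar_forward(cur)
    levels.append(d)
assert cur == [185] and levels[2] == [2]       # S3 = 185, D3 = 2

for d in reversed(levels):                     # rebuild from last to first
    cur = haar_inverse(cur, d)
assert cur == s0                               # lossless, as required by Req 1
```

Each extra level halves the number of scaling factors, which is why deeper decompositions demand longer blocks, as the parameter counts above show.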
Comparing only the parameters with enough samples for all levels, we obtain:

Level | lvl 1 | lvl 2 | lvl 3 | lvl 4 | lvl 5
Block size | 2 | 4 | 8 | 16 | 32
Number of parameters | 105 | 105 | 105 | 105 | 105
LeGall | 26.67% | 40.95% | 49.52% | 58.10% | 64.76%
Haar | 26.67% | 40.00% | 47.62% | 53.33% | 62.86%
DD | 26.67% | 40.95% | 50.48% | 59.05% | 63.81%

Level | lvl 6 | lvl 7 | lvl 8 | lvl 9 | lvl 10
Block size | 64 | 128 | 256 | 512 | 1024
Number of parameters | 105 | 105 | 105 | 105 | 105
LeGall | 67.62% | 68.57% | 66.67% | 57.14% | 51.43%
Haar | 65.71% | 66.67% | 61.90% | 54.29% | 49.52%
DD | 65.71% | 68.57% | 64.76% | 58.10% | 52.38%

- The results are similar.
- Increasing the decomposition level beyond 7/8 does not improve the entropy. In fact, the wavelet transform imposes coding the coefficients with more bits (9 for Haar, 10 for DD and LeGall).

Third step - identification of the most efficient algorithm. The chart hereafter identifies the algorithm which gives the best entropy:

 | Haar | LeGall | DD | Same result Haar/LeGall
lvl 1 | 62.71% | 72.54% | 0.00% | 35.25%
lvl 2 | 59.32% | 46.78% | 20.68% | 26.78%
lvl 3 | 57.97% | 46.10% | 21.36% | 25.42%
lvl 4 | 58.64% | 42.71% | 17.29% | 18.64%
lvl 5 | 55.59% | 41.69% | 18.98% | 16.27%
lvl 6 | 56.27% | 42.37% | 17.29% | 15.93%
lvl 7 | 36.88% | 36.17% | 26.95% | 0.00%
lvl 8 | 36.17% | 36.17% | 27.66% | 0.00%
lvl 9 | 30.83% | 39.17% | 30.00% | 0.00%
lvl 10 | 32.38% | 38.10% | 29.52% | 0.00%

- The Deslauriers-Dubuc wavelet seems less efficient than the two others.
- Haar seems more efficient than LeGall at lower decomposition levels. But as the entropy results are similar, we will have to test the source coding algorithms on all three wavelet outputs.

4. Lossless source coding techniques

4.1 Main techniques

The main principles of lossless coding are:
- Dictionary coding: a symbol or a group of symbols is replaced by a reference into a data structure (the dictionary):
  o Lempel-Ziv algorithms (LZ77 / LZ78 / LZW / LZSS ...).
- Entropic coding: each symbol is replaced by a variable-length code. This code depends on the probability of the symbol in the data set (the most frequent symbols get shorter codes than the less frequent ones):
  o Golomb / Rice / CCSDS [6][7],
  o Shannon-Fano / Huffman / arithmetic encoding.

These algorithms may be non-adaptive (fixed dictionary or probability table), adaptive (the dictionary or table is built during coding) or half-adaptive (two-pass algorithms: a first pass to build the dictionary, a second one to encode).

- Other: the redundancy in the message is eliminated by other means:
  o "standard" RLE (the sequence 'AAAAAAF' is replaced by '6AF').

4.2 Algorithm selection and adaptation to wavelets

We have chosen 3 of the 4 algorithms already tested in our previous studies:
- LZW
- Huffman
- CCSDS

The RLE algorithm has been discarded. It would be efficient with destructive wavelet algorithms, where lots of coefficients are quantized to 0, but probably not with reversible wavelets. The algorithms have to be adapted for our tests:
- 10-bit versions for LZW and Huffman,
- 10-bit version plus suppression of the pre-processor stage for the CCSDS algorithm.

Figure 3: CCSDS algorithm adaptation
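The CCSDS lossless coder of [6] is built around Rice coding. As a rough illustration of the entropic-coding principle (a simplified sketch, not the adapted 9/10-bit version described above), here is a minimal Rice encoder for wavelet coefficients, with the usual mapping of signed values onto non-negative integers:

```python
def to_unsigned(v):
    """Map signed coefficients to non-negative integers (zig-zag mapping)."""
    return 2 * v if v >= 0 else -2 * v - 1

def rice_encode(n, k):
    """Rice code: unary-coded quotient n >> k, then the k remainder bits."""
    bits = "1" * (n >> k) + "0"
    if k > 0:
        bits += format(n & ((1 << k) - 1), "0{}b".format(k))
    return bits

# Wavelet coefficients from the Haar example, coded with parameter k = 2
coefficients = [-3, 1, -1, 7, 1, -3, 2]
stream = "".join(rice_encode(to_unsigned(c), k=2) for c in coefficients)
```

Small-magnitude coefficients yield short codewords, which is why lowering the entropy of the wavelet output directly improves the gain of this coding stage.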
5. Tests and results

5.1 Step 1 - tests with all algorithms

We will compare the wavelet + source coding processing (Figure 4) with the standard source coding processing (Figure 5).

Figure 4: wavelet + source coding processing
Figure 5: standard source coding processing

The chart hereafter gives the proportion of parameters for which the coding gain evolves:

LZW | Haar | LeGall | DD
Coding gain improvement | 0.68% | 0.34% | 0.00%
Degradation > 30% | 1.36% | 1.02% | 2.37%
Degradation 20 to 30% | 50.85% | 48.47% | 49.15%
Degradation 10 to 20% | 46.78% | 48.81% | 48.47%
Degradation > 0 to 10% | 0.34% | 1.36% | 0.00%

Huffman | Haar | LeGall | DD
Coding gain improvement | 3.05% | 2.71% | 2.71%
Degradation > 30% | 16.27% | 27.46% | 30.17%
Degradation 20 to 30% | 36.61% | 27.46% | 24.75%
Degradation 10 to 20% | 40.68% | 37.97% | 38.31%
Degradation > 0 to 10% | 3.39% | 4.41% | 4.07%

CCSDS | Haar | LeGall | DD
Coding gain improvement | 25.42% | 22.71% | 24.07%
Degradation > 30% | 0.00% | 0.00% | 0.00%
Degradation 20 to 30% | 0.00% | 0.00% | 0.00%
Degradation 10 to 20% | 14.24% | 12.88% | 4.75%
Degradation > 0 to 10% | 60.34% | 64.41% | 71.19%

- The degradation of the coding gain with the LZW algorithm is very high: for the majority of parameters the gain diminishes by a factor of 10 to 30%.
- Same observation with Huffman; the maximum degradation level is even higher than LZW's.
- CCSDS is the only algorithm which shows an improvement for some parameters.
- The improvement figure is best with the Haar wavelet.

We will keep the Haar wavelet and the CCSDS source coding algorithm.

5.2 Step 2 - Haar + CCSDS algorithms

Influence of the decomposition levels and of the number of bits. The chart hereafter compares 9- and 10-bit wavelet coefficients, and 4 and 5 decomposition levels:

 | 4-level Haar, 10-bit CCSDS | 4-level Haar, 9-bit CCSDS | 5-level Haar, 10-bit CCSDS | 5-level Haar, 9-bit CCSDS
Coding gain improvement | 25.42% | 30.85% | 25.76% | 28.47%
Degradation > 30% | 0.00% | 0.00% | 0.00% | 0.00%
Degradation 20 to 30% | 0.00% | 0.00% | 3.39% | 2.71%
Degradation 10 to 20% | 14.24% | 9.49% | 20.34% | 17.63%
Degradation > 0 to 10% | 60.34% | 59.66% | 50.51% | 51.19%

The 9-bit version is naturally more efficient. The additional decomposition level rather degrades the performance.

Adaptation for the CCSDS 2nd extension option. The 2nd extension version needs an even number of samples to work. In our previous tests it was never used, because we send an odd N-1 number of samples to the entropy coder. The chart hereafter gives the results obtained by adding a "0" value to the N-1 samples specifically for this option:

 | CCSDS | New CCSDS
Gain improvement 0 to 2.5% | 27.46% | 33.90%
Gain improvement 2.5 to 5% | 2.71% | 6.44%
Gain improvement 5 to 7.5% | 0.68% | 1.36%
Gain improvement 7.5 to 10% | 0.00% | 0.68%
Gain improvement > 10% | 0.00% | 0.00%
Degradation > 15% | 1.69% | 1.02%
Degradation 10 to 15% | 7.80% | 4.41%
Degradation 5 to 10% | 19.32% | 17.63%
Degradation > 0 to 5% | 40.34% | 34.58%

The coding gain improvement concerns ~40% of the parameters.

Global coding gain with Haar + CCSDS. The coding gain for all parameters is calculated with these hypotheses:
- 16-sample blocks,
- 4-level Haar wavelet decomposition,
- the adapted 9-bit CCSDS algorithm with an additional "0" value for the 2nd extension option.

It is compared to the results obtained with the "standard" 8-bit CCSDS:

 | Standard CCSDS | Haar + CCSDS
Coding gain | 2.74 | 2.62

A preliminary wavelet pre-processing therefore does not improve the coding gain of the CCSDS algorithm alone. We will now study how to use the multi-resolution capability of the wavelet decomposition.

6. Multi-resolution analysis

6.1 Independent coding of scaling and wavelet coefficients

As we have seen earlier, the wavelet algorithms can create two distinct blocks of data:
- scaling factors, which can be compressed by a CCSDS algorithm,
- wavelet coefficients, which can be processed by the modified version of the CCSDS algorithm discussed earlier.
Figure 6: Independent coding of the two data flows

It is now possible to transmit first the packets with the coded truncated mean values, and later the packets with the coded details. The chart hereafter gives the coding gains corresponding to this association:

 | 16-sample blocks | 32-sample blocks | Blocks < 0.5 s, min size 16 samples | Blocks < 0.5 s, min size 32 samples
CCSDS | 2.74 | 3.1 | 3.19 | 3.08
Haar + CCSDS | 2.41 | 2.84 | 3.03 | 2.94

- The performance is reduced compared to the initial CCSDS by 4.5 to 12%.
- The results are still good: over 3 with 0.5-second blocks with a minimum size of 16 samples.

So it is possible to send data in several parts and still get interesting coding gains. But we must still accumulate some data before sending the coded packets!

6.2 Alternate processing

The scaling factors we obtain with the Haar wavelet are truncated mean values which are still 8 bits long. They can be sent directly, with no additional processing, in the standard telemetry output:

Figure 7: direct transmission of scaling factors

The performance is obviously degraded:

 | Blocks < 0.5 s, min size 16 samples | Blocks < 0.5 s, min size 32 samples
CCSDS | 3.19 | 3.08
Haar + CCSDS | 1.52 | 1.51

But it can be improved:

Figure 8: direct transmission of scaling factors - level 2 wavelet processing

The results are better with this alternate processing:

 | Blocks < 0.5 s, min size 16 samples | Blocks < 0.5 s, min size 32 samples
CCSDS | 3.19 | 3.08
Haar + CCSDS | 2.05 | 2.01

The coding gains are over 2, and the direct transmission is still rapid (the level 2 Haar processing does not require many operations). We then tried to see whether increasing the size of the detail packets can also increase the coding gain:

 | Blocks < 1 s, min size 16 samples | Blocks < 1 s, min size 32 samples
CCSDS | 2.21 | 2.17
Haar + CCSDS | 1.7 | 1.68

It is not the case. In fact the CCSDS algorithm is very efficient with low-entropy blocks, and as the size of the blocks increases, the entropy also tends to increase.

7. Conclusion

This study shows that:
- Wavelet processing permits the preparation of data before the use of source coding algorithms, with very simple calculations.
- The three wavelet algorithms we have tested give similar results. The simplest (integer Haar wavelet) is the most efficient because it permits further processing with fewer bits.
- Wavelets do not improve the coding gains of the source coding algorithms we have tested. The CCSDS algorithm is still the most efficient.
- The most interesting feature is the multi-level decomposition. This feature permits rapidly transmitting a meaningful part of the information (the mean value with the Haar wavelet), and transmitting later additional packets to recover the detailed data.

8. References

[1] D. Schott: Use of source coding techniques on Ariane 5 telemetry data, European Test & Telemetry Conference, Toulouse, France, 2013
[2] D. Schott: Use of source coding techniques on Ariane 5 1553 data, European Telemetry Conference, Nuremberg, Germany, 2014
[3] ISO/IEC 15444-1: Information technology — JPEG 2000 image coding system: Core coding system
[4] R. Calderbank, I. Daubechies, W. Sweldens and B.-L.
Yeo: Wavelet transforms that map integers to integers, Applied and Computational Harmonic Analysis 5, pp. 332-369 (1998)
[5] C. D. Giurcaneanu, I. Tabus and J. Astola: Integer Wavelet Transform Based Lossless Audio Compression, Signal Processing Laboratory, Tampere University of Technology, P.O. Box 553, FIN-33101 Tampere, Finland
[6] CCSDS 121.0-B-1: Recommendation for space data system standards — Lossless data compression, Blue Book
[7] CCSDS 120.0-G-2: Lossless data compression, Green Book


Cubesat communication CCSDS hardware in S and X band - Issler Jean-Luc and Lafabrie Philippe - CNES - France

Syrlinks has provided ESA with three flight models of its new X-band High Data Rate TeleMetry (HDR-TM) transmitter for microsatellites. The Proba-V satellite was successfully launched in May 2013 and the transmitters are performing nominally. Following this success, Syrlinks is finalizing with CNES the development of a new solution to download payload telemetry in X-band at high data rate for smaller platforms, such as nanosatellites and CubeSats. The first elements of a functional prototype were presented at recent small-satellite exhibitions: it is able to modulate data up to 100 Mbps using fully CCSDS-compatible filtered OQPSK modulation and convolutional coding (7, 1/2), delivers up to 2 W RF with no more than 10 W of DC consumption, and fits inside 0.25 unit of a standard CubeSat. In the first half of 2014 an EQM was developed, and the final evaluation tests are ongoing. This miniature X-band HDR-TM transmitter is planned to be used on board OPS-SAT, an ESA triple CubeSat dedicated to testing new space operation control concepts, currently planned for launch in 2016. It is also planned to be used on board EYE-SAT, a student/CNES triple CubeSat, also in 2016.

In parallel, answering customer requirements, Syrlinks is also developing with CNES a new S-band transceiver which is fully compliant with the CCSDS recommendations for RF, modulation and coding, and therefore with the ITU EES frequency bands for TT&C: 2025-2100 MHz and 2200-2290 MHz. The transmitter can provide data rates up to 3 Mbps (OQPSK with differential coding) with an adjustable output power from 27 to 33 dBm. The receiver supports data rates from 1 to 256 kbps (PCM/PM/SP-L). This integrated product (96 x 92 x 24 mm when no diplexer is used) is a miniaturized version of an existing Syrlinks platform. In the first half of 2014 an EQM was developed, and evaluation tests are also ongoing. This miniature S-band transceiver is also planned to be used on board OPS-SAT, where it will satisfy the requirement that the CubeSat look like a fully CCSDS-compliant spacecraft to the ESA ground control segment. The architecture of OPS-SAT, describing the S-band TT&C and X-band HDR-TM, will be presented. This paper provides information on these CCSDS-compliant RF products. Using these products would not only guarantee a correct use of the allocated frequencies but also ease the re-use of "standard" satellite ground stations for nano/CubeSat missions.


Implementation of a high throughput LDPC decoder in space-based TT&C - Wen Kuang, Nan Xie and Xianglu Li - Institute of Electronic Engineering, China Academy of Engineering Physics - China

Space-based TT&C systems built on TDRS play an important role in aerospace TT&C, and they demand channel coding techniques with high coding gain and high throughput. CCSDS recommended a single rate-7/8 QC-LDPC code for near-Earth missions which meets these requirements. A parallel decoder that transforms QC-LDPC codes into approximate block quasi-cyclic LDPC codes is designed. Using the VHDL language, the decoder is implemented on a Xilinx Virtex-5 FPGA; a throughput of 1 Gb/s and a coding gain of 5.5 dB at BER = 10^-7 are achieved.


The Implementation of an IP-Based Telemetry System for a Launch Vehicle

Feng Tieshan, Lan Kun, Zhao Weijun, Li Daquan, Xia Guojiang
Beijing Institute of Astronautical Systems Engineering, Nandahongmen Road No. 1, Fengtai District, Beijing

Abstract: The growth of the Internet has led many astronautical engineers to show interest in this high-speed and flexible technology. This paper describes a new generalized method for implementing the telemetry system of a launch vehicle. As opposed to the traditional data collecting system, the new system is built around specified switches as the central devices. The signal conditioners and measurement devices are both configured with different IPs. The data produced by the payload sensors are transmitted to the user's receiver with Internet-like frame constructions. Many innovative details are discussed in this paper. In Section I, a generally configured system is proposed to simplify the process of telemetry design. In Section II, some key technologies of packet telemetry are discussed, including the frame construction and the dynamic distribution algorithm of virtual links. Section III describes the device models and the simulation models of the whole system with OPNET Modeler. Section IV contains the results and analysis of the simulations.

Keywords: telemetry system, IP-based, configuration

1. Introduction

In recent years, the development of avionics technology has brought forward new requirements for the network equipment of launch vehicles. More and more scientific data and flight photographs are transferred over the wireless channel. While the 1553B databus has a limited potential to accommodate astronautical equipment, an appropriate network framework is badly needed for astronautical applications. Traditional Ethernet has exceptional advantages over other networks. With the development of the Internet, Ethernet technology has become a dominant network technology in the information technology field, with significant advantages such as low cost, high speed, uniform standards, better compatibility, etc. As a result, more and more astronautical engineers show their interest in Ethernet technology, which is proposed here to be extended to the telemetry system of a launch vehicle. However, telemetry systems have strict real-time and time-deterministic requirements, which are the weakness of Ethernet. In order to solve this problem, a deterministic system should be designed on top of the traditional Ethernet technology. An IP-based telemetry system is proposed in this paper to overcome these drawbacks.

2. Architecture of the IP-based system

2.1 Requirements

The deterministic system is designed to satisfy the needs of launch vehicle telemetry, both for the present and for the future. The following are some of the principal requirements to be supported by the IP-based system:
- Scale of system: 16 to 256 nodes
- Data rate: 10 Mbps to 1 Gbps
- Communication distance: point-to-point communication distance up to 100 meters
- Character: data transfer from multiple sources to multiple destinations
- Synchronization: the clock error between any two nodes should be less than 100 ns
- Bus management: bus activation, deactivation and configuration
- Topological structure: star topology, loop topology, line topology

2.2 Architecture of the system

Traditionally, the telemetry community has used a time-division method of packaging the data into link-level transmission frames for transport over radio links. With the growth of the Internet, the transmission of telemetry packets increasingly looks like computer-to-computer communications, as more telecommunications are used for data acquisition and distribution. Data communication occurs over wireless and wired links as well as fiber optic links. The IP-Based Telemetry Network (IPTN) is optimized from traditional Ethernet technology. Each equipment in the system is assigned a unique IP address; besides, all the network equipment also has a MAC address assigned at manufacture. Depending on a real-time protocol (like IEEE 1588), the whole system has a relatively accurate operational clock. The IPTN communication profile is shown in Fig. 1; from bottom to top: IEEE 802.3 Physical Layer or FC Physical Layer; IEEE 802.3 Optimized MAC Protocol; IEEE 802.3 Optimized Data Link Layer; IP; UDP; Higher Layer Protocols.

Fig.1. IPTN communication profile

In Fig. 1, the IEEE 802.3 Physical Layer has been used as the reference. In order to reduce the weight of the electric cabling and the influence of electromagnetic interference, fiber (referenced as FC0) is also taken into account. The Media Access Control layer and the Data Link layer are the most important layers, determining many integrated performance figures such as error rate, packet delay and so on. With regard to the IEEE 802.3 protocol data unit, IPTN comprises three additional fields which have been allocated to the first five bytes of the "data" field of the original IEEE 802.3 frame. In this way, the system can match the specific fields to recognise the correct frame and discard redundant frames. What is more, as the original format of the IEEE 802.3 frame is not destroyed, the frames communicated in the network can also be transmitted through industrial switchboards, which unifies the communication protocols on board and on the ground. Highly mature UDP/IP services are possible as well. The Internet Protocol sends or receives the network data of the net points. The IP layer is responsible for the fragmentation and reassembly of blocks of messages; this is required when the amount of data to be sent is greater than the maximum IP data payload of a single frame. The User Datagram Protocol (UDP) passes data from one or multiple applications to the lower network protocols. Although the Transmission Control Protocol is the most used on Ethernet, the timing of its receipt is non-deterministic; as a result, UDP is preferable in the IPTN. The frame format is shown in Fig. 2:

Field | Preamble | Start of frame delimiter | MAC destination | MAC source | Tag | Length | Payload | CRC
Size | 7 octets | 1 octet | 6 octets | 6 octets | 4 octets | 2 octets | 46~1500 octets | 4 octets

Fig.2. IPTN frame format

2.3 Key technology

Considering the practical application, the IPTN is quite different from the traditional 1553B. The differences between these two networks are listed in Table 1.
 Scale of System 16 to 256  Data Rate 10Mbps to 1Gbps  Communication Distance Point-to-point communication distance up to 100 meters  Character The data transfer will be from mutlisources to multidestinations  Synchronization The clock error of every two nodes should be less than 100ns  Bus Management The bus activation, deactivation and configuration  Topological Structure Star Topology, Loop Topology, Line Topology 2.2 Architecture of the System Traditionally, the telemetry community has used a time division method of packaging the data into link-level transmission data frames for transport over radio links. With the growth of the Internet, the transmission of the telemetry packets often looks like a computer-to-computer communications as more telecommunications are used for data acquisition and distribution. Data communication will occur over wireless and wired links as well as fiber optic links. The IP-Based Telemetry Network (IPTN) is optimized from the traditional Ethernet technology. The equipments in the system will be distributed with an unique IP address. Besides all the network equipments will also have a MAC address when they were manufactured.Depending on the real-time protocol(like IEEE 1588), the whole system will have a relatively accurate operational clock. The IPTN communication profile is shown in Fig. 1. ETTC 2015– European Test & Telemetry Conference IEEE 802.3 Physical Layer or FC Physical Layer IEEE 802.3 Optimized MAC Protocol IEEE 802.3 Optimized Data Link Layer IP UDP Higher Layer Protocols Fig.1. IPTN communication profile In Fig 1, IEEE 802.3 Physical Layer has been used for references. In order to reduce the weight of electric cable and the influence of electromagnetic interference.The fiber (References as FC0) is also taken into account. The Media Access Control Layer and the Data Link Layer are the most important Layers which determine many integrated performance such as error rate ,packet delay and so on.With regard to the IEEE 802.3 Protocol Data Unit, IPTN comprises three additional fields which have been allocated on the first five bytes of the „data‟ field of the IEEE 802.3 original frame. In this way, the system could match the specific field to recognise the correct frame and discard the redundant frame.What‟s more , as we do not destroy the original format of the IEEE 802.3 frame, the frames communicating in the network can also be transmitted in the industrial switchboard, which could unify the communicational protocols on board and ground. The highly mature UDP/IP services is possible as well. The Internet Protocol sends or receives the network data of the net points. The IP is responsible for the fragmentation and re-assembly of blocks of messages. This is required when the amount of data needed to sent is greater than the maximum IP data payload of a single frame. The User Datagram Protocol (UDP) parses data from one or multiple applications to the lower network protocols. However the Transmit Control Protocol is most used in the Ethernet, the timing of the receipt is non-deterministic. As a result, UDP is preferable in the IPTN. The frame format is shown in Fig 2. Preamble Start of frame delimiter MAC Destination MAC Source Tag Length Payload CRC 7 octets 1 octet 6 octets 6 octets 4 2 octets 46~1500 octets 4 octets Fig.2. IPTN Frame Format 2.3 Key Technology Considering the practical application, the IPTN is much different from the traditional 1553B.The differences between these two networks is listed in Table 1. 
Table 1. Differences between IPTN and 1553B

  Item           IPTN                 1553B
  Scalability    16 to 256 nodes      2 to 31 nodes
  Data rate      10 Mbps to 1 Gbps    1 Mbps
  Transmit mode  Duplex               Half duplex
  Reliability    High                 High
  E2E delay      Deterministic        Deterministic
  Redundancy     Yes                  Yes
  Topology       Star, loop or line   Line

The disadvantage of Ethernet for an astronautical application is the lack of intrinsic determinism in its medium access method. To improve the bandwidth and the real-time behaviour, many ideas have been put forward to make the telemetry system deterministic. Besides increasing the bandwidth to avoid collisions, several other mechanisms have been proposed to secure the real-time behaviour of the whole system. The IPTN, as a flexible and reliable system, improves its characteristics in several ways. First, it uses the IEEE 802.3 PHY layer to guarantee 100/1000 Mbps communication ports. Secondly, it comprises three additional fields, allocated in the first five bytes of the "data" field of the original IEEE 802.3 frame; these new fields are used to manage the redundant frames and to distinguish the different data types. The transmission of telemetry data from the payload segment to the user segment, and of commands from the user to the payload, are typical communication problems encountered in many settings.

Synchronization is a key technology in telemetry system design. Based on the original IEEE 802.3 frame, synchronization information is also included in the improved packet structure. While the network is operating, the network equipment controls the system clock and synchronizes the crystal oscillators. Network messages can be divided into two types:

1) Event messages: messages with a time stamp. When such a message travels through the network, the network equipment can calculate the point-to-point delay from the time information stored in the message.

2) General messages: as opposed to event messages, general messages do not carry any time information; they are mainly used to build the master-slave relationships.

Three steps are followed to achieve the synchronization of the whole network.

1) Establish the master clock. In order to supervise the clocks of all the network equipment, there must be one precise system clock in the whole system. This precise system clock can be obtained from BeiDou-2 or GPS, or it can be acquired from a high-precision local clock source.

2) Synchronize the frequencies of the different clocks. The master clock sends sync messages to the slave, each carrying a time stamp of its sending time. The slave receives each message and records the receiving time. Once the same slave has received at least two consecutive messages, it can compare the sending intervals of contiguous messages with the corresponding receiving intervals and adjust its clock frequency to the master's. In this way all the equipment in the network works at a unified frequency. The regulation of the clock frequency is shown in Fig. 3.

Fig. 3. Schematic diagram of the frequency adjustment

3) Synchronize the times of the different clocks. Event messages are used in the clock synchronization process. The detailed steps are as follows.
- The master clock sends a sync message to the slave, including the sending time T1;
- the slave records the receiving time T2;
- the slave returns a request message to the master clock and records its local sending time T3;
- the master clock records the receiving time T4 of the request message, then sends a delay-request message containing T4 back to the slave.

The completion of the clock synchronization is shown in Figure 4.

Fig. 4. Schematic diagram of the timing adjustment

From the four time stamps above, the slave clock can calculate the path delay and the time offset as follows:

Delay = [(T4 - T1) - (T3 - T2)] / 2    [1]
Offtime = [(T2 - T1) - (T4 - T3)] / 2    [2]

Using the route delay and the offset, the slave adjusts its clock to the master clock.

Data integration is a key function of a launch-vehicle telemetry system. Compared with traditional methods, network-based integration has advantages in terms of reliability, data capacity, system intelligence, general-purpose use and so on. In a launch-vehicle data integration system, the configuration of the whole network is the kernel: it determines important design targets such as throughput capacity, packet-loss probability and network delay. Considering the practical demands, this article focuses on the configuration design of the launch-vehicle data integration system.

The appearance of virtual local area networks has strongly promoted the development of real-time Ethernet. When the IPTN is operating, many virtual links may flow into a single device. That device must handle these flows according to a scheduling algorithm, which determines the performance of the whole network. In the IPTN, the equipment implements a scheduler to manage the flows. The algorithm is mainly based on Quality of Service (QoS): the scheduler records the waiting time of every virtual link and, according to the weights of the links' priority, waiting time and message length, serves the appropriate link and updates the weight of every item. The simulation will show the results of the different scheduling algorithms.

3. Simulation of the IPTN

3.1 Models of the network equipment

The network equipment models are simulated with OPNET Modeler. There are two kinds of models in the IPTN simulation: the communicating node and the switch node. The communicating node is shown in Figure 5.

Fig. 5. Model of the communicating node

This node comprises six parts. "Source" generates the original data flow. "Pre-processor" marks the time stamp and assigns the attributes of each virtual link. "pt_0" and "pr_0" are mature function blocks that send and receive the network data. The results and the synchronization are recorded by "Ete-delay-record". Finally, the data exchanged in the network flow into the "sink", which extracts the useful information and destroys the packet.

The switch node receives the data flows from the other communicating nodes. The schematic diagram of the switch node is shown in Figure 6.

Fig. 6. Model of the switching node

Every switching node has 9 ports to communicate with other nodes and contains 4 sub-models. "pr" and "pt" receive and send the network packets. "Regular" implements the MAC protocol of the switching node. "Scheduler" is the most important sub-model in the switching node.
This block picks up certain attributes of every data flow and sends the flows out according to a given algorithm, introduced briefly in Section 2.3.

3.2 Test scenario

In this section we consider a work-conserving system shared by multiple input processes under different service disciplines. Within each source, FIFO queuing is assumed. In order to compare the different scheduling algorithms, two scenarios are built to validate the whole system.

3.2.1 Scenario I

In scenario I, eight communicating nodes (node1~node8) act as data sources and send network frames to node9 through the routing switch node at the centre.

Fig. 7. Topology of scenario I

In the switch node, two different methods are used to test the effects. In the first, every node in the network adopts the FIFO service discipline; in the second, the nodes in the network have strict priorities. Assume that, if i
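As an illustration of equations [1] and [2] above, the following minimal Python sketch (ours, not from the paper; the timestamp values are invented) recovers the path delay and the clock offset from the four time stamps:

```python
# Minimal sketch of the clock synchronization of Section 2.3
# (equations [1] and [2]). Timestamps are in nanoseconds; the
# names and numbers are illustrative, not from the paper.

def delay_and_offset(t1, t2, t3, t4):
    """t1: master send time, t2: slave receive time,
    t3: slave send time,  t4: master receive time."""
    delay = ((t4 - t1) - (t3 - t2)) / 2    # one-way path delay, eq. [1]
    offset = ((t2 - t1) - (t4 - t3)) / 2   # slave clock offset,  eq. [2]
    return delay, offset

# Example: true one-way delay 250 ns, slave clock 80 ns ahead of master.
t1 = 1_000_000
t2 = t1 + 250 + 80        # arrives 250 ns later, stamped by a clock +80 ns
t3 = t2 + 5_000           # slave replies 5 us later
t4 = t3 + 250 - 80        # master stamps with its own clock (-80 ns)

delay, offset = delay_and_offset(t1, t2, t3, t4)
print(delay, offset)      # -> 250.0 80.0
```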


Archive: Session 8 ETSC presentations

  Length    Date        Time   Name
  688128    2015-06-06  17:41  8-1- MAYER ETTC_2015_ETSC_GF.ppt
  1818052   2015-06-10  02:28  CCSDS New for ETTC 2015.pdf
  621568    2015-03-05  12:29  Ch10 Segregated Recording.ppt
  1843461   2015-06-10  03:58  ETSC report.pdf
  135413    2015-06-10  03:58  ETSC report.pptx
  343552    2015-06-09  14:41  ETSC-2015 SC3.ppt
  15256141  2015-06-10  13:34  ETTC2015ETSCLuc FALGA.pptx
  1094656   2015-06-09  14:33  IRIG106 Chapter 7 Telemetry Downlink.ppt


ETTC 2015 – European Test & Telemetry Conference

Computational Results for Flight Test Points Distribution in the Flight Envelope and Dynamic Relocation

L. Mallozzi1, A. d'Argenio2, P. De Paolis3, G. Schiano4
1: Dipartimento di Matematica ed Applicazioni – Università degli Studi di Napoli "Federico II", Via Claudio 21, 80125 Napoli, Italy
2: Dipartimento di Ingegneria Aerospaziale – Università degli Studi di Napoli "Federico II", Via Claudio 21, 80125 Napoli, Italy – Italian Air Force – Italian Flight Test Wing (RSV), Via di Pratica di Mare 45, 00040 Pomezia, Italy
3: Dipartimento di Ingegneria Aerospaziale – Università degli Studi di Napoli "Federico II", Via Claudio 21, 80125 Napoli, Italy – Italian Air Force – Italian Flight Test Wing (RSV), Via di Pratica di Mare 45, 00040 Pomezia, Italy
4: Italian Air Force – 6° Reparto Manutenzione Elicotteri, Via di Pratica di Mare 45, 00040 Pomezia, Italy

Abstract: A computational methodology for designing an experimental test matrix is presented. The problem consists in the optimal distribution of test points in a two-dimensional domain. Each test point is assumed to be the source of different fields which expose all other points to repulsive forces acting in different directions. The result of the mutual repulsive forces is a dynamic evolution of the configuration of test points in the domain. The method has been extended to the additional task of dynamically relocating the remaining test points after an initial subset has been performed and a need to change the number of test points has arisen.

Keywords: Flight Test, Envelope Expansion, Field Theory, Optimization, Spatial Location.

1. Introduction

When designing a set of experiments, it is important to find an optimal test point distribution in order to maximize efficiency and minimize costs. In some cases, e.g. flutter or envelope expansion testing, the test organization deals with a unique prototype. For this type of aircraft, limiting the total flight hours of testing is a mandatory requirement, not only because of the significant associated cost, but also to minimize the risk of failure or loss of a highly valuable asset; in the last decade, a fatal mishap involving a fully instrumented F-22 prototype caused almost one year of delay in the Raptor development program and, unfortunately, the loss of the experimental crew. Cost-wise, it is useful to highlight that the average operating cost of a 4th/5th-generation fighter-type aircraft can range between 50 k$ and 100 k$ per flight hour. Therefore, when testing is required across the entire flight envelope, it is essential to find a way to distribute the test points efficiently so as to gather all the required data while saving time and reducing the number of test points. This means that, given the test constraints and the key parameters to be evaluated, all efforts should be spent on optimizing the test point distribution, covering the entire envelope following the rules imposed by the objective functions, whose aim is to populate the areas where test execution has a higher priority based on engineering requirements. While the optimality criteria are dictated by the specific problem at hand, the optimization process itself is applicable to wide classes of problems. Many techniques can be found in the literature, although only a few deal with the problem of spatial location [1-4]. The latter has been approached as a non-cooperative game in the companion paper [5].
In general terms, the proposed method aims at locating points in a two-dimensional space according to soft constraints (minimization of a potential) and hard constraints (boundaries of the permitted domain). The method makes use of the concept of the potential of a point immersed in the field generated by the other points [6, 7], producing mutual repulsive forces, in line with other optimization methods based on analogies with physical systems, such as simulated annealing.

In this paper we describe the design of a test matrix for an envelope expansion flight test activity [8]. Our aim is to optimize the distribution of the test points inside the flight envelope. The flight envelope (Figure 1) is a region defined by aircraft limits; it represents the area where the A/C (aircraft) is permitted to fly.

Figure 1: The flight envelope

Outside this envelope, flight can be conducted only in a restricted and controlled test environment; for example, in order to define and certify the flight envelope, flight test pilots and engineers must also fly beyond its limits, which, at the time of the first flight, exist only through analysis or modelling and simulation: the flight envelope is always the result of interpolation between flown test points, never of extrapolation. The classical methods used in this kind of test are the so-called Economy Methods, which consist of choosing a subset of flight conditions in accordance with the build-up approach in dynamic pressure, and the Extensive Methods, which basically attempt to cover most of the flight envelope and turn out to be very expensive and time-consuming. We considered that the main driving factors in this kind of test are the requirements to be demonstrated by two categories of engineers: structural engineers and systems engineers. The test matrix is designed to give flight test engineers the opportunity to gather all the relevant data necessary for the new store certification process. The objective is to locate a predefined number of points in the classic Mach number-altitude envelope, in order to simultaneously maximize the mutual distances of the test points in the envelope and optimize the distributions of the three major parameters (Mach number, altitude and dynamic pressure, or equivalent airspeed) according to the desired engineering requirements. Moreover, we extend the results to the problem of dynamically relocating points at a given stage of the test program, when contingencies require a revision of the overall number of experiments.

2. The location problem

The objective of the specific problem is the identification of a test matrix that simultaneously gathers all the relevant data supporting the evaluation of the aero-elastic and environmental characteristics of an aircraft. This leads to an optimization process distributing the points according to the requirements of structural engineers, mostly interested in airspeed and compressibility effects, and of systems engineers, mostly interested in altitude and airspeed effects. The key idea of the proposed method is to consider each point as the source of a different field for each parameter to be controlled and to let the points move as a result of the mutual repulsive forces generated by the fields.
Eventually the points come to rest when the equilibrium of forces is reached, which corresponds to the condition of minimum potential energy.

More precisely, let n be a fixed natural number (n > 5), the number of the prescribed flight tests. Each test point is defined by a pair (Mi, Hi), where i is an integer, i ∈ [1, n] = {1, ..., n}, and Mi and Hi are real numbers chosen in the sets Mi ∈ [ML, MU] and Hi ∈ [HL, HU], where the nonnegative constants 0 ≤ ML < MU and 0 ≤ HL < HU define the bounds of the Mach number and altitude choices. An additional hard constraint on the test points (Mi, Hi) is the condition that the equivalent airspeed is bounded, Vi ∈ [VL, VU] (0 ≤ VL < VU); under the assumption of the International Standard Atmosphere, the equivalent airspeed can be computed as a function of Mi and Hi:

V(Mi, Hi) = a Mi (1 - b Hi)^c

with a, b, c positive real constants.

For the specific problem, three fields are introduced, associated with the Mach number, the pressure altitude and the equivalent airspeed. The intensity of each field is a function of the value of the related parameter at the specific position of the point. Moreover, the Mach number and pressure altitude fields act only along the corresponding direction (Mach number and altitude respectively), while the airspeed field acts radially in both directions. The airspeed field thus plays the dual role of distributing the points in airspeed and spreading the points over the envelope. Engineering considerations suggest that high Mach number, high airspeed and low altitude are more critical for aero-elastic and environmental issues, so the test points are expected to be more concentrated in the bottom-right region of the envelope. This is achieved by establishing field intensity laws reflecting this objective: the Mach number field intensity decreases with Mach number, the altitude field intensity increases with altitude and the airspeed field intensity decreases with airspeed. The relative importance of the different parameters is set by properly scaling the intensities of the three fields. Let each test point be the source of three distinct fields, whose intensities are, respectively:

mMi = WM [1 + KM (Mi - ML) / (MU - ML)]
mHi = WH [1 + KH (Hi - HL) / (HU - HL)]
mVi = WV [1 + KV (Vi - VL) / (VU - VL)]

where WM, WH, WV are positive real numbers (defining the relative weights of the three fields), while KM, KH, KV are real numbers (prescribing the desired distribution trend of the corresponding parameters). The first two fields act along a single dimension (the respective parameter), while the third field acts radially. Assuming repulsive forces proportional to the inverse of the cubic distance from the field source, the resulting accelerations (in the two directions M and H) to which all points are subjected (except the first 5 points, fixed for initialization) are:

aMi = MU Σ(j=1..n, j≠i) { mMj / [(Mi - Mj)/MU]^3 + mVj [(Mi - Mj)/MU] / [ ((Mi - Mj)/MU)^2 + ((Hi - Hj)/HU)^2 ]^2 }

aHi = HU Σ(j=1..n, j≠i) { mHj / [(Hi - Hj)/HU]^3 + mVj [(Hi - Hj)/HU] / [ ((Mi - Mj)/MU)^2 + ((Hi - Hj)/HU)^2 ]^2 }

(where the first fixed point (j=1) is at the bottom-left corner of the envelope (ML, HL) and the third fixed point (j=3) is at the top-right corner of the envelope (MU, HU)).
The points are then allowed to move sequentially in the envelope in response to the respective accelerations. At each iteration the time step is chosen so that the displacements become progressively smaller (as the distribution converges toward the optimal solution) while the Mach and altitude hard constraints are not violated. The potential energy of the configuration is:

J(M, H) = Σ(i=1..n) [ (aMi / MU)^2 + (aHi / HU)^2 ]

with (M, H) = (M1, ..., Mn, H1, ..., Hn). The objective function is the cost function represented by the sum of the accelerations generated by all test points. The goal of the proposed algorithm is to minimize this cost function by looking for an equilibrium condition, local or global. Each equilibrium condition of the considered test points corresponds to a local or global minimum of the potential energy function. In fact, during the evolution of the transient location, part of the potential energy of the test points within the field generated by the other points is converted into kinetic energy, which is dissipated instantaneously (a different approach could also introduce a friction coefficient). The equilibrium condition is therefore characterized by a minimum value, local or global, of the potential energy function.

2.1. The execution order problem

Once the test matrix is defined, a preliminary chronological order of the test points must be established. To this end, several approaches can be followed depending on the particular application. In our example we considered two requirements: safety and efficiency. Given the hazardous nature of flutter (aero-elastic phenomenon) testing, safety is the first and paramount priority; efficiency can be sought only when safety is assured. Assuming that a 20 KEAS (knots equivalent airspeed) margin between test points is a cautious and safe approach to the envelope expansion task, the test points are ordered by increasing airspeed; however, if more than a single point meets the 20 KEAS margin criterion, efficiency considerations suggest ordering the points so as to best manage energy (either in ascending or descending order). The two forms of energy attributed to a flight condition represented by a point in the envelope are potential and kinetic energy, leading to the following expression for the specific energy (energy per unit weight):

SE = H + V^2 / (2g)

where H is the pressure altitude, V is the true airspeed (under the assumption of zero wind) and g is the acceleration of gravity. Of course, this is just one possible simple criterion for attributing an a priori execution order. Depending on the complexity of the problem, several additional constraints might apply, and the actual execution order might need to be dynamically adjusted while in progress, based on the results gathered from previous points. However, different choices of the execution order do not invalidate the effectiveness of the proposed method for identifying the location of the test points.

2.2. The relocation game

Suppose that a test matrix has been designed and a certain number of test points have been performed according to a predefined execution order. Suppose also that initially unforeseen events (partial test results, budget reviews, changes of the trial objectives) require a modification of the number of test points.
The relocation problem of the remaining test points (which may be either more or fewer than in the original plan) can be approached similarly to the initial task described before. The only difference is that the remaining points must be distributed with an additional hard constraint: the presence in the envelope of the test points already performed, along with their respective fields. With this minor adjustment, the same algorithm can be used for the relocation problem. In this paper we present, in particular, the additional case of subtracting test points, comparing optimal and sub-optimal test point distributions.

3. Algorithm and Results

The solution of such a problem cannot be found analytically, so an iterative process is adopted, letting the points evolve until numerical convergence is reached (the sum of all forces falls below a given threshold). The repulsive forces are similar to those acting between electrical charges of the same sign, except that the intensity decreases with the cubic power of the distance (to reduce the effect of distant points compared to near ones). The acceleration to which a point is subject depends only on its position in the field and on the field intensity (as in electric or gravitational fields). To improve convergence, momentum is not preserved from step to step: in other terms, the point is allowed to move according to the acceleration imposed by the fields, but at the next step it is assumed initially at rest and it further evolves only by virtue of the new acceleration produced by the new spatial configuration, regardless of the previous velocity. All points move sequentially, and the time step for each point is chosen in such a way that the distance travelled (at the given step) decreases exponentially with elapsed time (to improve convergence) and the point is not allowed to exit the permitted domain (violate the hard constraints).

3.1. The Algorithm

Let the first five points stay fixed in the 5 corners of the flight envelope (hard constraint). The remaining n - 5 points are free to travel within the permitted envelope; let the initial distribution of those points be:

Mi = (ML + MU)/2 + [(MU - ML)/4] cos(2π (i - 6)/(n - 5)),  i = 6, ..., n
Hi = (HL + HU)/2 + [(HU - HL)/4] sin(2π (i - 6)/(n - 5)),  i = 6, ..., n

The displacements consequent to the accelerations acting during the time step are computed as:

dMi = (1/2) aMi dt^2
dHi = (1/2) aHi dt^2

thus ignoring any velocity gathered in the previous time steps, in order to facilitate convergence. Here:

dt = min( dtmin ; sqrt(2 dMmax / |aMi|) ; sqrt(2 dHmax / |aHi|) )

where

dtmin = 0.01 / n
dMmax = f(MU, ML, Mi, aMi, t)
dHmax = f(HU, HL, Hi, aHi, t)

The displacements thus computed do not guarantee adherence to the last hard constraint: airspeed within the two permitted boundaries. An additional check must be performed: if the computed time step and acceleration cause the airspeed to exceed the envelope boundary, then the new position is set at 90% of the distance between the initial position and the airspeed limit (along the direction of the calculated acceleration). The acceleration is then set to zero, because points constrained on the border are assumed to be subject to a reaction force (acceleration) equal and opposite to the force (acceleration) which tends to push them out of the envelope.
Finally the position is updated according to the calculated displacements, and the field intensities are updated pursuant to the new configuration:

Mi(k+1) = Mi(k) + dMi
Hi(k+1) = Hi(k) + dHi
t(k+1) = t(k) + dt

The weights mMi, mHi, mVi are also updated at each iteration, and we assume that the EAS weight decreases with time: initially the points must be quickly spread over the envelope and the weight is large; the weight must then decay with time to the desired final value. More precisely we let:

mMi(k) = WM [1 + KM (Mi(k) - ML) / (MU - ML)]
mHi(k) = WH [1 + KH (Hi(k) - HL) / (HU - HL)]
mVi(k) = WV (1 + 100 e^(-t(k)/10)) [1 + KV (Vi(k) - VL) / (VU - VL)]

The process is reiterated until a convergence cost function decays below a predetermined threshold. The convergence cost function is a measure of the residual accelerations to which the test points are subject, that is, the potential energy of the configuration J(M, H). Convergence is reached when J is less than a predefined value (dependent on the number of points).

3.2. Test Case

We present a test case with n = 25 planned flight tests. The parameter choices are the following. The flight envelope bounds are [ML, MU] = [0.1, 0.8] and [HL, HU] = [0, 3×10^4]; the weights are WM = 1, KM = -0.95; WH = 2, KH = 100; WV = 500, KV = -0.8; and the constants in the airspeed relation are a = 1116.46, b = 6.87×10^-6, c = 2.62.

The results for a twenty-five-point location problem are shown in Figure 2.

Figure 2: The optimal distribution for 25 test points

We now present one of the two possible relocation problems: point addition. When 15 points have been performed, the test management decides to increase the number of tests from 25 to 30 (for an overall number of thirty test points); in the new configuration the added test points are denoted by white circles. In this case the final point distribution (30 test points, Figure 3) can be described as sub-optimal compared to the case where thirty test points are located in one step (Figure 4), without the constraint of the 15 points already located in the flight envelope. The different distributions in the two cases can be observed.

Figure 3: Test point addition (25 initial points, 15 performed points, 5 extra points), J = 2.5×10^-4

Figure 4: The optimal distribution for 30 test points

The second possible case is point subtraction. In this case, when the 15 points have been performed, the test management decides to decrease the number of tests from 25 to 20 (for an overall number of twenty test points). Also in this case the final point distribution (20 test points, Figure 5) is sub-optimal compared to the optimal distribution (Figure 6).

Figure 5: Test point subtraction (25 initial points, 15 performed points, 5 points removed)

Figure 6: The optimal distribution for 20 test points

4. Conclusions

An optimization method based on the concept of fields is proposed for the identification of a two-dimensional test matrix. The experimental test point distribution is optimized according to tunable soft constraints and hard constraints. The method has been tested on a practical case: the simultaneous evaluation of the aero-elastic and environmental characteristics of an aircraft. The method proved effective and computationally efficient: all the configurations tested converged in a short time and the outcome was satisfactory.
The method was extended to the additional problem of relocating part of the test points after the execution of an initial subset of experiments, following the decision of the test management to increase or decrease the number of experiments. The results were satisfactory for this additional task as well.

5. References

[1] Basar T. and Olsder G.J. (1999) Dynamic Noncooperative Game Theory, reprint of the second (1995) edition, Classics in Applied Mathematics 23, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA.
[2] Drezner Z. (1995) Facility Location: a Survey of Applications and Methods, Springer Verlag, New York.
[3] Hansen P., Peeters D., Richard D. and Thisse J.-F. (1985) The minisum and minimax location problems revisited, Operations Research, vol. 33, pp. 1251-1265.
[4] Mallozzi L. (2007) Noncooperative facility location games, Operations Research Letters, vol. 35, pp. 151-154.
[5] Mallozzi L., d'Argenio A., Di Francesco G., De Paolis P. (2015) Computational results for flight test points distribution in the flight envelope, in Advances in Evolutionary and Deterministic Methods for Design, Optimization and Control in Engineering and Sciences, edited by D. Greiner, B. Galván, J. Periaux, N. Gauger, K. Giannakoglou, G. Winter, Computational Methods in Applied Sciences Series, Springer, vol. 36, pp. 401-409.
[6] Mallozzi L. (2013) An application of Optimization Theory to the study of equilibria for games: a survey, Central European Journal of Operations Research, vol. 21, issue 3, pp. 523-539.
[7] Monderer D. and Shapley L.S. (1996) Potential games, Games and Economic Behavior, vol. 14, pp. 124-143.
[8] d'Argenio A., de Nicola C., De Paolis P., Di Francesco G., Mallozzi L. (2014) Design of a Flight Test Matrix and Dynamic Relocation of Test Points, Journal of Algorithms and Optimization, vol. 2, issue 3, pp. 52-60.
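The iteration of Section 3.1 can be condensed into a short sketch. The following is a minimal Python illustration under the test-case parameters of Section 3.2; the airspeed bounds VL and VU, the coordinates of the five fixed corner points and the maximum-displacement rule (the paper leaves the function f unspecified) are our own assumptions, and the 90% airspeed-boundary rule is omitted for brevity. It is not the authors' code.

```python
import math

# Envelope, field and atmosphere parameters from the test case (Sec. 3.2).
ML, MU = 0.1, 0.8
HL, HU = 0.0, 3.0e4
a, b, c = 1116.46, 6.87e-6, 2.62               # V(M,H) = a*M*(1 - b*H)^c
WM, KM = 1.0, -0.95
WH, KH = 2.0, 100.0
WV, KV = 500.0, -0.8

def eas(M, H):
    return a * M * (1.0 - b * H) ** c

VL, VU = eas(ML, HU), eas(MU, HL)              # airspeed bounds (assumed)

def intensities(M, H, t):
    V = eas(M, H)
    mM = WM * (1 + KM * (M - ML) / (MU - ML))
    mH = WH * (1 + KH * (H - HL) / (HU - HL))
    mV = WV * (1 + 100 * math.exp(-t / 10)) * (1 + KV * (V - VL) / (VU - VL))
    return mM, mH, mV

def accel(i, pts, t):
    """Normalized inverse-cube repulsion on point i (Section 2)."""
    Mi, Hi = pts[i]
    aM = aH = 0.0
    for j, (Mj, Hj) in enumerate(pts):
        if j == i:
            continue
        mM, mH, mV = intensities(Mj, Hj, t)
        dm, dh = (Mi - Mj) / MU, (Hi - Hj) / HU
        r2 = dm * dm + dh * dh
        if dm:
            aM += mM / dm ** 3                 # 1-D Mach field
        if dh:
            aH += mH / dh ** 3                 # 1-D altitude field
        if r2:
            aM += mV * dm / r2 ** 2            # radial airspeed field
            aH += mV * dh / r2 ** 2
    return MU * aM, HU * aH

n = 25
# Five fixed points (corner coordinates are our assumption), then the
# initial ellipse of Section 3.1 for the n-5 free points.
pts = [(ML, HL), (MU, HL), (MU, HU), (ML, HU), (ML, 0.5 * (HL + HU))]
pts += [((ML + MU) / 2 + (MU - ML) / 4 * math.cos(2 * math.pi * k / (n - 5)),
         (HL + HU) / 2 + (HU - HL) / 4 * math.sin(2 * math.pi * k / (n - 5)))
        for k in range(n - 5)]

t, dt_min = 0.0, 0.01 / n
for sweep in range(5000):
    J = 0.0
    for i in range(5, n):
        aM, aH = accel(i, pts, t)
        J += (aM / MU) ** 2 + (aH / HU) ** 2
        # Max displacement shrinking with time (our stand-in for f(...)).
        dM_max = 0.01 * (MU - ML) * math.exp(-t)
        dH_max = 0.01 * (HU - HL) * math.exp(-t)
        dt = min(dt_min,
                 math.sqrt(2 * dM_max / abs(aM)) if aM else dt_min,
                 math.sqrt(2 * dH_max / abs(aH)) if aH else dt_min)
        Mi = min(MU, max(ML, pts[i][0] + 0.5 * aM * dt * dt))
        Hi = min(HU, max(HL, pts[i][1] + 0.5 * aH * dt * dt))
        pts[i] = (Mi, Hi)                      # (airspeed clamp omitted)
        t += dt
    if J < 1e-3:                               # threshold depends on n
        break

print(f"J = {J:.3e} after {sweep + 1} sweeps")
```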


High-resolution electro-acoustic transducer for dielectric characterization of outer space materials - Lucie Galloy-Gimenez, Laurent Berquez, Fulbert Baudoin and Denis Payan - LAPLACE, CNES – France

Various aspects of the space environment can cause on-orbit satellite anomalies. Frequently used in satellite structures as thermal blankets or as insulators, dielectric materials tend to accumulate charges due to the flux of charged space particles. This charge accumulation increases the local electric field, which is responsible for electrostatic discharges. To gain a better understanding of the mechanisms of these discharges, it is necessary to understand the dynamics of charge transport in the solid dielectrics used in outer space, and thus to clarify the nature, the time evolution of the position and the amount of the stored charge. The pulsed electro-acoustic (PEA) method is used to recover the charge distribution within dielectric materials of at least 150 µm thickness. However, dielectric materials for space applications are typically thin, less than 50 µm; with a spatial resolution on the order of 10 µm, the PEA method is then no longer sufficiently precise. This study aims to build a high-resolution PEA cell and, more precisely, an optimized acoustic sensor. Building on the results of previous studies, we developed an optimized acoustic sensor, which is the key element of the PEA method. This detector is composed of a thin piezoelectric film and an impedance-matched absorber material. In order to improve the spatial resolution of the PEA method, the piezoelectric film must have a thickness of around 1 µm. As piezoelectric films of this thickness are not commercially available, we deposited a P(VDF-TrFE) polymer by spin coating directly onto the flat absorber material. After deposition, the P(VDF-TrFE) was poled by the corona discharge method in order to give it its piezoelectric properties. With this protocol, two piezoelectric films were produced, with thicknesses of 3.2 µm and 1.5 µm. After their experimental characterization, the acoustic sensors were fitted to the PEA cell. The assembly was then used to perform, to our knowledge for the first time, measurements on 50 µm thick PTFE with a spatial resolution estimated at 2.1 µm in the case of the 1.5 µm thick piezoelectric film.
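As a rough cross-check of the resolution figures quoted above, the transit time of the acoustic wave through the piezoelectric film sets a lower bound on the achievable resolution. The sketch below uses textbook sound speeds for P(VDF-TrFE) and PTFE, which are our assumptions and not values from the abstract; the measured 2.1 µm also includes pulse-width and amplifier-bandwidth contributions:

```python
# Back-of-envelope lower bound for PEA spatial resolution: the piezo film
# integrates the acoustic signal over its own transit time, so the
# resolution in the sample is roughly v_sample * (d_film / v_film).
# Sound speeds are textbook approximations, not values from the abstract.

v_film = 2400.0      # m/s, longitudinal sound speed in P(VDF-TrFE) (assumed)
v_sample = 1400.0    # m/s, longitudinal sound speed in PTFE (assumed)

for d_film_um in (3.2, 1.5):
    tau = d_film_um * 1e-6 / v_film          # film transit time (s)
    res_um = v_sample * tau * 1e6            # resolution in sample (um)
    print(f"{d_film_um} um film -> ~{res_um:.2f} um lower bound")
# 3.2 um film -> ~1.87 um; 1.5 um film -> ~0.88 um (electronics add more)
```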


Non-contacting Methods with Lidar for Spacecraft Separation Ranging - Shengzhe Chen, Hui Feng and Yuzhi Feng - Beijing Institute of Aerospace Systems Engineering – China

Relative movements, and especially the range between two separating spacecraft units, are of great importance to the data acquisition system and are a key issue in spacecraft flight testing. The usual approach to this measurement relies on cable-type displacement sensors, a mature technology that nevertheless has drawbacks, because the cable connecting the two parts inevitably introduces harmful obstructions. This paper therefore proposes a feasible non-contacting method for ranging measurement during spacecraft separation. In order to detect the relative movement of the two separating spacecraft units, a phase lidar with a CW laser source is designed to measure the separation range. In addition, considering the vibration and shock caused by separation, a not-to-be-neglected part of the separation environment which may lead to a vertical or horizontal deviation of the separation, the laser source is equipped with a scanner. With active light emission and reception, meaning that no cable connects the two separating parts, the harmful factors caused by cabled sensors naturally disappear. This non-contacting ranging measurement is still under study, which means the above-mentioned equipment is currently being modelled mathematically. With all the equipment models established and arranged, a simulation was completed. Results show that over the detection area, which covers scanning angles from 0 to 3 rad, the ranging accuracy varies from 1% to 0.1% depending on the scanning angle. This new non-contacting method may be well suited to spacecraft separation ranging.
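The abstract does not give its ranging equations; for illustration, the standard phase-shift relation for an amplitude-modulated CW lidar is sketched below, with an assumed modulation frequency:

```python
# Standard phase-shift ranging for an amplitude-modulated CW lidar:
# the round trip introduces a phase shift dphi = 4*pi*f_mod*d/c, so
# d = c*dphi/(4*pi*f_mod). f_mod is an assumed example value; the
# abstract does not specify the modulation parameters.
import math

C = 299_792_458.0          # speed of light, m/s

def range_from_phase(dphi_rad, f_mod_hz):
    return C * dphi_rad / (4 * math.pi * f_mod_hz)

f_mod = 10e6               # 10 MHz modulation (assumed)
print("unambiguous range:", C / (2 * f_mod), "m")       # ~15 m
print("range at 90 deg phase shift:",
      range_from_phase(math.pi / 2, f_mod), "m")        # ~3.75 m
```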


ETTC 2015 – European Test & Telemetry Conference

How do you go about achieving your video recorder? (Chapter 2)

Pierrick LAMOUR1, Loïc MAUHOURAT1
1: TDM, 5 rue Paul Deplante, 33700 MERIGNAC, FRANCE

Abstract: The second part of this paper aims to give information about how to design or choose a recorder. It does not bring solutions but asks questions about the considerations necessary when designing a recorder.

1- Introduction

This paper focuses on 2 items present in a recorder, and particularly in a video recorder:
• Routing data to the recorder
• Recording system

2- Routing data to the recorder

To limit installation costs and save time, you need to use the existing infrastructure.

2.1- Infrastructure

Infrastructure limitations come from your environment, your know-how and, of course, your budget. Most of you use twisted pairs (Ethernet, PCIe, DVI, LVDS...), coaxial cable or optical fibres. But do you use the right standard on them? Do you use ancillary data?

2.2- Video standard

Most video standards are not limited to video but also transmit data such as time stamping, closed captions and user data. This information is important to reduce infrastructure and should be used to control or inform your recorder. Some COTS products allow this data to be activated, but not all of them. Choosing a video standard is not that simple. The chosen standard has to be compliant with your infrastructure, your video resolution and your recorder. It also has to withstand the environment and to be scalable. With the rise of digital, analogue standards are slowly disappearing. Standards like DVI, HDMI or DisplayPort are quite simple to use but are limited in distance. The digital SMPTE standards (SMPTE-259/292) are limited in resolution and are not really rugged. The new ARINC 818 standard still lacks a fully interoperable standardization but offers a lot of possibilities. The CoaXPress standard is still emerging but looks powerful. Compressed (H.262/264/265) or uncompressed (GigE Vision) data on Ethernet, using the embedded network, needs a powerful system but is quite simple in small installations.

3- Recording system

In order to design or choose a recorder, many parameters must be taken into account: the data to record, the storage media, the file systems, the file format and the environmental conditions. Identifying all these parameters is necessary to define the architecture of your recorder: computing capacity, hardware acceleration and storage media.

3.1- Data to record

To record data, you first have to acquire it. The acquisition is characterized by the following parameters. The "acquisition rate" defines the data bitrate: minimum, maximum, average. The "link" defines the physical link which transports the data stream: Ethernet, PCIe cable, optical fibre, coaxial cable, etc. The "protocol" or "data format" defines the manner in which the data is transported.

3.2- Data storage

3.2.1- How to choose the media

To store the acquired data, you have to choose a media. The media technology to use depends on several factors:
- the way data is exchanged between the recorder and the data player or analyzer,
- the media performance with respect to the data to record,
- the volume of data,
- the data retention,
- the environmental conditions.

Several media are available: HDD, SSD, SD card, flash, USB drive, disk sets, etc.

3.2.2- SSD case

Solid State Drives [1] are based on flash memory and are available in different form factors: 1.8'' or 2.5'' disks, mSATA, etc. They are usually more resistant to shocks and vibrations than HDDs.
They have better performance than HDDs but they are more expensive. If you choose an SSD, pay attention to these characteristics:
- number of erase/program cycles,
- sustained read and write rates, IO operations per second,
- data retention time,
- capacity,
- garbage collection algorithm.

In fact, the garbage collection algorithm is very important, because of the flash memory principle: before data can be written, a complete sector must be erased. The garbage collector aims to erase unused sectors in advance in order to increase writing performance. To help the garbage collector, a TRIM command is usually added. This command is used to mark the data to be erased. Nevertheless, the file system used and all the stacks between the file system and the drives (RAID, cipher) have to support this command.

3.2.3- Disk sets

To increase capacity and performance, it is possible to assemble several drives. Here again, different technologies are possible: RAID0, or LVM with striping. These two techniques parallelize read/write operations across all drives. The OS sees the set of disks as a single drive. To increase capacity and reliability, you can use RAID5 or RAID6; this approach is not recommended with SSDs [5]. RAID1 increases data reliability by mirroring all data to each drive. Compared to LVM, which is only implemented in software, RAID can be managed by hardware devices. Hybrid implementations of RAID exist; they are usually called "fake" RAID. Indeed, many chipsets integrate RAID functionality but are not autonomous and need software to work. Studies reveal that full software solutions perform better than "fake" RAID. You must choose the right trade-off between cost, performance and reliability.

3.2.4- Encryption

Security is another notion that can be added to the data storage topic. Indeed, the recorded content can be highly confidential; in this case, data encryption must be used. Some SSDs include cipher algorithms, hardware devices can also provide this function, and software solutions are possible. Hybrid solutions exist too: some CPUs, like the Intel® Core™ i7, embed dedicated encryption instructions. This specific instruction set, called Intel® AES-NI, accelerates cipher computation.

3.2.5- File systems

In computing, a file system is used to control how data is stored and retrieved. Depending on the chosen media and on the OS of the data exploitation system, many file systems can be used.

Flash file systems [2] are especially designed for storing files on flash memory. They typically include a wear-levelling algorithm to spread writes over the chips, but they also have to embed other algorithms such as bad-block recovery, power-loss recovery, garbage collection and error correction.

Journaling file systems [3] are file systems that keep track of the changes that will be made in a journal before committing them to the main file system. In the event of a system crash or power failure, such file systems are quicker to bring back online and less likely to become corrupted. They usually keep track of stored metadata only; this results in improved performance at the expense of an increased possibility of data corruption.

Power-cut robustness is a well-known problem in embedded systems. Even if the file system prevents data corruption, the media is not necessarily protected against power cuts. SSD or flash-based drive controllers typically manage logical block tables, and a corruption of these tables can result in drive breakage.
Fortunately, devices exist with an energy reserve to ensure that these tables are written correctly.

3.2.6- File format

To cope with the data corruption issue, the data file container is important. Two items have to be addressed:
- the detection of data corruption,
- the possibility of using the recorded data.

To detect data corruption in a file, the data must be recorded in chunks which contain a data integrity control tool. It can be a checksum or a CRC, but it can also be the chunk structure itself. Moreover, you must be able to use a file even if it is corrupted: if a file is damaged, it is necessary to recover most of the data without heavy post-processing. For instance, you can use the MPEG2-TS [4] format to record compressed multimedia data. An MPEG2-TS file is composed of fixed-length packets of 188 bytes. The first byte of a packet is a known pattern called the "sync byte", with a value of 0x47. Each stream inside the file is identified by a PID. The packet header also has a continuity counter which is incremented for each packet of the same PID (each stream). If a part of the file is damaged, you simply have to seek to the next sync byte in order to continue working with the data. Furthermore, if the file is not properly closed, only its end is missing; everything else can be used. We advise against using file formats with tail information in a system with a power-cut risk.

3.3- Architecture

Based on all the parameters discussed above, choices have to be made to determine the recorder architecture. What is done in software? What is done in hardware? What kind of CPU has to be used?

4- References

[1] http://en.wikipedia.org/wiki/Solid-state_drive
[2] http://en.wikipedia.org/wiki/Flash_file_system
[3] http://en.wikipedia.org/wiki/Journaling_file_system
[4] ISO/IEC 13818-1, Information technology — Generic coding of moving pictures and associated audio information: Systems
[5] "Don't Let RAID Raid the Lifetime of Your SSD Array", Sangwhan Moon and A. L. Narasimha Reddy (Texas A&M University)

5- Glossary

AES: Advanced Encryption Standard
CPU: Central Processing Unit
CRC: Cyclic Redundancy Check
HDD: Hard Disk Drive
IO: Input / Output
LVM: Logical Volume Management
PID: Packet Identifier
RAID: Redundant Array of Independent Disks
SD: Secure Digital (card)
SSD: Solid State Drive
TS: Transport Stream (MPEG2)
USB: Universal Serial Bus
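As an illustration of the recovery property described in section 3.2.6, here is a minimal sketch (ours, not the authors' code) that resynchronizes on the sync byte and flags continuity-counter gaps:

```python
# Minimal sketch of MPEG2-TS damage recovery (see section 3.2.6):
# resynchronize on the 0x47 sync byte and flag continuity-counter gaps.
# Simplified: it ignores adaptation-field-only packets, whose counter
# does not increment.

PKT = 188          # fixed MPEG2-TS packet length
SYNC = 0x47        # sync byte value

def scan_ts(data: bytes):
    last_cc = {}   # PID -> last continuity counter seen
    i = 0
    while i + PKT <= len(data):
        if data[i] != SYNC:
            i += 1                      # damaged area: slide to next sync byte
            continue
        pid = ((data[i + 1] & 0x1F) << 8) | data[i + 2]
        cc = data[i + 3] & 0x0F         # 4-bit continuity counter
        if pid in last_cc and cc != (last_cc[pid] + 1) % 16:
            print(f"continuity gap on PID {pid:#06x} at offset {i}")
        last_cc[pid] = cc
        i += PKT
    return last_cc

# Usage: scan_ts(open("capture.ts", "rb").read())
```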


ETTC 2015 – European Test & Telemetry Conference

SpaceWireless: Time-synchronized & reliable wireless sensor networks for Spacecraft

Damon PARSY, CEO of Beanair GmbH; Kunliang YAO, CTO of Beanair GmbH; Mohamed-Yosri JAOUADI, Software Architect of Beanair GmbH; Wolfener Straße 32-34, 12681 Berlin, Germany

Abstract: Beanair GmbH has designed a new generation of time-synchronized and reliable wireless sensor networks for the structural health monitoring of spacecraft and aircraft.

Keywords: Wireless sensor networks, Ultra-Wide-Band, Structural Health Monitoring

1. Introduction

The vast majority of existing wireless protocols are not designed to meet avionics and spacecraft design specifications. Factors such as low bandwidth, non-self-synchronized wireless networks, non-existent lost-data recovery mechanisms, overlapping frequency bands (2.4 GHz), unreliable medium access (CSMA/CA mechanism) and modulation techniques unsuited to the spacecraft environment call for the introduction of innovative technological platforms based on newer standards. Such platforms are already being designed and tested by Beanair research laboratories in conjunction with major partners and operators. The main task of the newly designed, "spacecraft/aircraft dedicated" platform is to challenge existing technologies by introducing a reliable, ultra-low-power and time-synchronized (accuracy < 1 µs) wireless network particularly suited to dynamic measurement (10-20 kHz).

Figure 1: Deployment of the SpaceWireless WSN inside the Ariane 6 space launcher

Thanks to the integrated coherent receiver, multi-path fading is used to strengthen the received signal: "UWB channels are delay dispersive, with rms delay spreads on the order of 5-50 ns. Due to the large bandwidth and resulting fine delay resolution, a coherent (Rake) receiver sees a large number of multipath components. This has the advantage of a high degree of delay diversity, so that small-scale fading fluctuations are almost completely eliminated."

2. Time-of-flight calculation

Clock synchronization uses two-way ranging (TWR):
- no need for a common clock reference;
- two-way ranging eliminates the error due to imperfect synchronization between nodes; the relative clock drift still affects the ranging accuracy (as a function of the treplyB duration, because of clock-drift error accumulation):
  - tp is expressed in ns;
  - treplyB should be expressed in µs.

Figure 2: Two-way ranging mechanism

In this scheme, ranging-capable device A (RDEV) begins the session by sending a range request packet to device B. Device B then waits a time treplyB, known to both devices, before sending a request back to device A. Based on that packet, device A can measure the round-trip time troundA = 2 tp + treplyB and extract the one-way time of flight, tp, with respect to its own reference time. The large bandwidth of the UWB signals enables ranging estimations accurate to better than 3 ns.

7. Glossary

UWB: Ultra Wide Band
WSN: Wireless Sensor Networks
SHM: Structural Health Monitoring
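The TWR arithmetic above can be checked with a short sketch (ours, not Beanair's implementation; the numbers are illustrative):

```python
# Minimal sketch of the two-way ranging (TWR) computation of Section 2:
# device A measures troundA = 2*tp + treplyB and recovers the one-way
# time of flight tp, then the range.

C = 299_792_458.0                 # speed of light, m/s

def twr_range(t_round_a_s, t_reply_b_s):
    tp = (t_round_a_s - t_reply_b_s) / 2.0   # one-way time of flight
    return tp, tp * C

# Example: 10 m separation -> tp ~ 33.36 ns; B replies after 100 us.
tp_true = 10.0 / C
t_round = 2 * tp_true + 100e-6
tp, d = twr_range(t_round, 100e-6)
print(f"tp = {tp * 1e9:.2f} ns, range = {d:.2f} m")

# A relative clock drift e on B's reply interval biases tp by e*treplyB/2,
# which is why treplyB is kept short (us) while tp is only ns.
```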


Laboratory Test Mean scalable to the test

Sylvain DELRIEU1
1: Airbus Operations S.A.S., 316 Route de Bayonne, Toulouse, France

Abstract: Aircraft test centres use flight and ground test means to complete aircraft certification. In this scope, testers operate Laboratory Test Means (LTM) to validate aircraft functions which are getting more and more complex. Moreover, certification deadlines and test campaign costs are getting more and more challenging and demand that LTM use be optimized. In this context, the current LTM development approach is no longer suitable to face these new constraints. So far, LTMs are designed when the testing strategy for a new aircraft is defined, and each design is quite specific. The drawbacks of such an approach are: a tunnel effect in LTM development, no simple sharing of testing resources, difficult LTM reuse, LTM upgrades requiring re-engineering, and many LTMs to be maintained even if only partially used. As a result, future LTMs shall be scalable to the tests, and tests shall not be dedicated to an LTM. A modular, distributed and open architecture based on standards will be an enabler to run tests in a new way. Agility in the testing process and in the LTM configuration will allow test means to be adjusted to the test procedure. The LTM will easily adapt to A/C changes and testing strategy priorities. Tests will be run in parallel using only the required resources. Interconnection of distributed LTMs will improve testing capabilities. Combined with remote and virtual testing capabilities, the next generation of LTMs will be less and less a frozen local device. Shared by the design office, the test centre and the A/C suppliers, the LTM will also create more synergies between stakeholders by providing appropriate services.

Keywords: Aircraft Test Centres, Ground Test Means, Agility

1. Definition and Context

The Laboratory Test Mean (LTM) provides the test specialists with all the services required to conduct the test and to provide data on the systems under test. The System Under Test (SUT) is not part of the LTM. The AIRBUS test centre owns a wide range of LTMs for each aircraft programme (Single Aisle, Long Range, A380, A400M, ...). They are used for system integration, validation and operational tests of more and more complex aircraft functions.

Figure 1: AIRBUS Laboratory Test Mean

2. Issues

When the testing strategy for a new aircraft is defined, a new LTM is specially designed.

Issue: if an aircraft change or an evolution of the testing perimeter impacts the LTM, it is difficult for the LTM to be easily adapted. For example, modifications concerning aircraft wiring can involve deep LTM modifications. On top of that, if, for instance, a new acquisition system is needed to fulfil new testing requirements, the supply delay for provisions can postpone the test campaign. This looks like a tunnel effect.

Figure 2: Tunnel effect in LTM development

Once the LTM is developed, its life cycle can be summarized as follows:

Figure 3: LTM life cycle

LTM installation and commissioning: in this phase the LTM components are integrated and validated with preliminary systems under test.

Issue: this step is done for each LTM, as LTMs are dedicated to a set of tests for each aircraft programme.

LTM configuration: this step configures the LTM with a set of aircraft inputs, including the definitions of the systems under test.

Issue: the LTM configuration has to be anticipated to be compliant with the next test procedures to be run.

Test preparation: in this stage all the information specific to the test is defined and used for LTM preparation.
A set of test preparations is linked to one LTM configuration.

Issue: test inconsistency with the LTM configuration is possible, as the configuration is not derived directly from the test.

Test execution: the test is run in consistency with the LTM configuration and the test preparation.

Issue: while a test is running, LTM maintenance is excluded. Moreover, as the design of each LTM is quite specific, sharing testing resources is complex, even though an LTM is not fully used during the A/C life cycle, as shown below.

Figure 4: LTM use rate

LTM maintenance and upgrade: in this phase the LTM is updated for maintenance or for LTM evolutions.

Issue: all LTMs have to be maintained even if partially used, and an LTM upgrade requires re-engineering work. Test execution has to be stopped for maintenance or upgrade purposes.

3. Main topics

To face ever shorter deadlines, it is necessary to develop LTMs which can easily adapt to changes. To face cost challenges, it is mandatory to optimize LTM use. In other words, the LTM shall be scalable to the tests. The new LTM life cycle shall be:

Figure 5: Next-generation LTM life cycle

In this new process, as the LTM is scalable to the test, there is no longer a dedicated step to install it. On top of that, there is no longer a specific LTM configuration phase, as the configuration is deduced from the test preparation. Maintenance or LTM upgrades can also be done in parallel with a test, on unused testing resources, as only the resources required for the test are allocated.

3.1 Partially scalable LTM

A first step is to make the LTM scalable in terms of testing resources. By testing resources we mean the hardware and software, including the applications, used for a test. The idea is to allocate, with regard to the test definition, the minimum of testing resources used for a test. An application of this architecture is to manage, for each ATA system or set of ATA systems, a pool of testing resources shared across A/C programmes. As the workload depends closely on the A/C life cycle for each ATA system, the LTM spare capacity is computed as a margin allowing several A/C programmes to be tested in parallel. Unlike an LTM architecture where testing resources are not shared, if a testing resource is out of order the test can still be run, taking advantage of the globally available testing resources.

Figure 6: Sharing an LTM for each ATA system

The process involves the LTM manager, responsible for scheduling the tests, and the test specialists, who conduct the tests.

Figure 7: New LTM stakeholders

At hardware level, LTM scalability will enable sharing, for example, of the acquisition and generation systems used to communicate with the aircraft equipment. For each type of aircraft resource, a multiplexer manages the aircraft signals used in the frame of the test. This approach enables several tests to be run in parallel on the same generic LTM.

Figure 8: Hardware testing resource scalability

At application level, from a test campaign perspective, an LTM can also be seen as a pre-flight test mean. The next LTM shall be easily scalable for both LTM and flight applications.

Figure 9: Flight test / LTM application scalability

3.2 Fully scalable LTM

A second step towards an LTM fully scalable to the tests is the ability to add any equipment on demand for the test. Of course, this capability is quite easy to implement with virtual equipment, but it also has to be considered for real equipment. This is possible with a fully network-oriented solution making LTM components and real equipment communicate. The network configuration can, for example, be deduced automatically from the aircraft wiring diagrams.
This kind of solution brings limitations in terms of representativeness, mainly for inter-system communication. However, it remains valid depending on the type of test, and it is in line with the new avionics technology, which is more and more network-oriented for inter-system communication.

Figure 10: Fully scalable LTM

As a result, this principle eases the interconnection of real equipment and Laboratory Test Means and opens new capabilities for distributed testing.

4. Architecture Principles

To match the main principles, a modular and data-centric architecture can be seen as a premium solution.

4.1 Modular Approach

Of course the LTM shall be modular to adapt to the test, but the key point is to define the right level of modularity. The level of modularity shall strike a balance, enabling LTM components and real systems coming from different suppliers to be integrated easily, while not adding too much integration complexity. As a first approach we could define as a module:
- an aircraft acquisition / generation system
- a virtual equipment (simulation)
- a test application
- a real equipment

As a consequence, modules have to share the same communication-management infrastructure. Moreover, as everything is driven by the test, modules shall support a common configuration interface, which will ensure consistency.

4.2 Data-Centric Approach

LTM components and aircraft equipment are all seen as modules. Each module either produces or consumes data, and what matters for a functional test is the data. A data-centric system infrastructure may provide a real-time view of the data, reachable by all the modules. The communication infrastructure shall guarantee quality of service with regard to the consumers' needs. This capability facilitates the evolution towards a true plug-and-play LTM. The ability to quickly bring several modules together into a single functional test environment shall boost savings in set-up time and cost. It also makes it possible to select best-in-class modules.

Figure 11: Next LTM architecture principles

5. Conclusion

To face future certification and cost constraints, the next LTM generation has to act as a chameleon: the process will be "test-centric" and the LTM will be exactly adapted to the test. Combined with other approaches such as virtual testing, test automation and remote testing, the LTM will improve testing productivity more and more. The challenge will be to guarantee configuration management, as each configuration is unique and driven by the test.

7. Glossary

ATA: Air Transport Association of America
LTM: Laboratory Test Mean
A/C: Aircraft
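The data-centric principle of section 4.2 can be illustrated with a minimal publish/subscribe sketch. This is our own illustration (the topic names are invented); in a real LTM this role would be played by a QoS-aware middleware such as DDS:

```python
# Minimal sketch of a data-centric bus: modules publish and subscribe to
# named data topics and never address each other directly, which is what
# makes modules pluggable (section 4.2). Names are illustrative.
from collections import defaultdict
from typing import Any, Callable

class DataBus:
    def __init__(self):
        self._subs = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic: str, callback: Callable[[Any], None]):
        self._subs[topic].append(callback)

    def publish(self, topic: str, sample: Any):
        for cb in self._subs[topic]:
            cb(sample)                   # real middleware would add QoS here

bus = DataBus()

# An acquisition module publishes; a test application consumes. Either
# side can be swapped (real vs. simulated equipment) without rewiring.
bus.subscribe("a429/adc1/altitude", lambda s: print("test app got", s))
bus.publish("a429/adc1/altitude", {"t": 0.02, "value_ft": 31000})
```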


ETTC 2015 – European Test & Telemetry Conference

Optimized Automatic Calibration Tool. Application for Flight Test Programs.
José Manuel Baena, Lorenzo Miranda
Airbus Defence & Space, Madrid (Spain)

Abstract: The Industrial Revolution and its "serial manufacturing" created the need to solve the problem of interchangeability of elements. This issue was overcome by establishing metrological criteria to limit the validity of test data. With this information the designer could calculate the tolerance and uncertainty of use required for the manufactured equipment. The increasing number of parameters required in tests makes it necessary to find efficient solutions that balance the time, cost and quality of test instrumentation. The particular nature of test measurements has generated controversy about how to obtain the minimum quality and quantification of the measurement needed to undertake efficient testing. This article describes the processes and tools used to calibrate a large number of data acquisition channels, obtaining the information required for the treatment of each parameter and the interval of its validity according to the metrological standards, while maintaining the balance between quality, time and cost.

Keywords: Calibration, Optimization, Flight Test

1. Introduction

Test instrumentation, being intended for this purpose, must meet stricter requirements than instrumentation for series elements. Although traceability of measurement must be ensured in both cases, for series instrumentation this is done by a metrological verification of its tolerance, while for test instrumentation it is done by a calibration that also identifies the measurement uncertainty. Calibration must be performed following the stated metrological standards. In our case, in addition to the procedures established by the company [2][3], the criteria given in the GUM [1] are followed, as used by accredited international metrology laboratories (e.g. BIPM, CEM, PTB, NIST).

An analogue channel calibration requires at least 10 to 20 calibration points, and each point needs about 200 samples to be validated. Additionally, all equipment used has to be calibration-standard equipment, with the cost and availability implications that entails. In recent years the requirements for flight test have increased considerably, towards a large quantity of parameters, of the order of hundreds, that have to be commissioned in a few days. We therefore have to look for solutions that optimize the cost and time of calibration. This article describes the solution for the calibration of 192 analogue channels required to measure temperatures with RTD PT100 sensors.

2. Flight test instrumentation measurements

The measurements in this trial are made using a data acquisition system that outputs information over ETHERNET for recording and analysis. It is composed of 12 conditioner cards for PT100 sensors with 16 channels each, so it can acquire up to 192 parameters. The cards polarize the RTD with a current of 1.6 mA and linearize its response according to the α380 function. The measurement range of the channels has been programmed from -200 °C to 540 °C. Figure 1 shows the measurement chain.

Figure 1: Measurement chain.

3. Analogue channel calibration (DAS)

In any test instrumentation it is essential to perform calibration.
Having stated this, it should be clarified that the aim of the calibration is, in addition to improving the metrological characteristics of the instrument, to identify and quantify the credibility of the measurement. The purpose of calibration is as follows:
• Ensure that the measurement obtained is traceable to the primary standard.
• Obtain the relationship between the measured magnitude and the instrument unit.
• Identify the range within which the measurement would be found, with a probability of 95%, if instead of measuring with the test instrument you had measured with the primary standard. This range is called the "measurement uncertainty".
• Obtain the calibration certificate which, in addition to giving evidence of the measurement quality, provides information that allows limiting the validity of the data obtained in the trial and, additionally, establishing the reproducibility conditions.

At first sight the achievement of these objectives seems difficult, even more so when there are so many elements and it has to be done with limited resources and time. Reality shows that it is possible to comply with the stated metrological quality criteria within the required time. Moreover, as a result, it is possible to use acquisition cards whose manufacturer-recommended re-adjustment period has expired, thereby optimizing the available resources. The configuration of equipment used to perform the calibration is shown in figure 2.

Figure 2: Calibration system.

The calibration system is designed to be able to address the high number of RTD conditioning channels to the calibration standard device. To that end, a multiplexer that allows channels to be selected automatically is integrated into the acquisition chain. The acquisition of the calibration points is performed in three steps:
• The conditioner channel is addressed by means of the multiplexer, through RS232.
• The setpoint of the simulated temperature is set by the RTD standard, through RS232.
• The converter number corresponding to the simulated temperature on the selected channel is read, through ETHERNET.

Ideally, we would only have to acquire all the necessary setpoints sequentially and address all channels in the calibration process. The reality is different, because it is necessary to wait for the stabilization of both the setpoint and the measurement. For this reason it is necessary to identify when the acquired information is valid. The validation criterion is what will reduce the time spent selecting setpoints and switching channels, optimizing the calibration. To establish the validation criterion, the study focused on the evolution of the measurement in two respects: the transition between setpoints and its stabilization.

4. Problem of validation of setpoints

To identify and optimize the range of validity of a measurement required in tests, it is necessary to calibrate the measuring instrument. This means that the nominal coefficients applied in the function relating the instrument units to the measured magnitude have to be discarded, and those obtained in the calibration have to be used instead. Therefore the aim of the calibration is twofold: obtain the relationship between the instrument unit and the magnitude, and obtain the measurement uncertainty. To do this it is necessary to acquire a number of samples per setpoint which, depending on the quantity, will result in a more or less reliable calibration.
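As an illustration of this three-step sequence, a minimal Python sketch of the automated sweep is given below (the instrument objects and method names are hypothetical, not the authors' tool; the setpoint list is truncated for brevity):

# Hypothetical sketch of the three-step automated calibration sweep.
import itertools

CHANNELS = range(192)                          # 12 cards x 16 channels
SETPOINTS_C = [-200.0, -127.0, -53.0, 540.0]   # illustrative subset of the 22 setpoints

def calibration_sweep(mux, rtd_standard, das):
    results = {}
    for setpoint, channel in itertools.product(SETPOINTS_C, CHANNELS):
        mux.select(channel)                     # step 1: address channel (RS232)
        rtd_standard.set_temperature(setpoint)  # step 2: set simulated temperature (RS232)
        counts = das.read_counts(channel)       # step 3: read A/D converter number (ETHERNET)
        results[(channel, setpoint)] = counts
    return results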
On the one hand, acquiring many points gives greater reliability to the measurement; on the other hand, it increases the calibration time. Figure 3 represents the temperature-indication samples for a simulated transition from 0 °C to 10 °C.

Figure 3: Transition from 0 °C to 10 °C.

In this figure it can be seen that for a simulated temperature of 0 °C a reading of about -2 °C is obtained, and for a simulated temperature of 10 °C a reading of 8 °C is obtained. This result does not necessarily mean that the instrument is damaged, but that it has drifts in its electronics and needs a readjustment. This happens to a greater or lesser degree in all measuring instruments, and it is the reason why a period of validity is applied to the calibration of the instrument (coefficients + uncertainty). Calibration has two objectives:
• Obtain the relationship between the instrument reading and the measured magnitude. In this case the objective is to find the relationship between the A/D converter number and the temperature simulated by a standard device which fixes the equivalent RTD PT100 resistance. The relationship is obtained by applying the least-squares method, yielding the coefficients of the regression line of the function relating the two values. Additionally, the calibration residues are obtained; they represent the deviation between the points obtained and the regression line.
• Obtain the measurement uncertainty, which will limit the validity of the data. In tests, we have to give a range within which the measurement would be found, with a probability of 95%, if it were made with the primary standard. The uncertainty calculation is performed according to the metrological criteria established in the GUM [1].

For obtaining both the coefficients and the uncertainty, it is necessary to have enough samples to give reliability to the calculation. The number of samples depends, on the one hand, on the equipment accuracy and, on the other, on the number of setpoints required. Figure 4 shows the samples before and after the transition from 0 °C to 10 °C.

Figure 4: Samples before and after the transition.

In this graph it can be seen that there are two circumstances that influence the measurement result:
• A: this area represents the transition, with overshoot and damping.
• B: this area represents the stabilized signal, where only the instrument accuracy has influence.

Accordingly, to validate the setpoint it is necessary to identify the samples acquired during the transition in order to discard them. Once the stable area is identified, a number of samples must be acquired to calculate the mean and standard deviation needed for the calculation. Area "B" represents the instrument accuracy, defined as the ability of the instrument to repeat the measurement result when the same amount of magnitude is applied to it. Figure 5 shows in more detail the dispersion of measurements for 10 °C.

Figure 5: Detail of the dispersion of measurements for 10 °C.

Figure 6 shows the dispersion for 10 °C in instrument units, given as 16-bit converter numbers.

Figure 6: Dispersion for 10 °C in instrument units.

Making a continuous acquisition and representing the probability function of the acquired samples, figure 7 is obtained.

Figure 7: Probability function of the acquired samples.

In figure 7 it can be seen that the probability function follows a normal distribution, which means that the usual statistical tools can be used.
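As a small numerical aside (a sketch with synthetic data standing in for the stable area, not the paper's data set), near-normality can be checked by comparing the empirical coverage of the ±2σ interval with the theoretical value of about 95.4%:

# Sketch: check near-normality of stable-area samples (synthetic data).
import random, statistics

samples = [random.gauss(8.0, 0.05) for _ in range(10_000)]  # stand-in for area "B"
mu = statistics.mean(samples)
sigma = statistics.stdev(samples)
within = sum(abs(s - mu) <= 2 * sigma for s in samples) / len(samples)
print(f"mean={mu:.3f}  sigma={sigma:.4f}  coverage(2-sigma)={within:.3f}  (normal: ~0.954)")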
The problem arises with the number of samples necessary, since a small number may not be representative, while a high number will make the calibration slower. The number of samples used in figure 7 is 106,000; knowing that they are acquired at 256 s/s, the validation time could reach 7 minutes. This result forced us to seek a more efficient method to validate the setpoint.

5. Setpoint validation criterion

To calculate the mean and standard deviation of the setpoint, we use 100 samples, filling a shift register of 100 elements, so that every sample passes from the first position to the last, after which it is discarded. The register is divided into five blocks of 20 samples each. Each time a sample is entered and the shift is performed, the mean and standard deviation of each group of 20 samples are calculated. The values are passed to a validation module where the data are evaluated against the validation criterion, which identifies and discards transients and states where the measurement has not yet stabilized. Figure 8 shows the block diagram.

Figure 8: Validation criterion.

The validation criterion looks for a representative pack of samples of a stable measurement. For this, the following must be guaranteed in the analysed pack:
• The average value should not follow an increasing or decreasing function.
• At each setpoint, the noise should be uniform.

This criterion allows a significant sample pack to be obtained and gives information about possible failures in the acquisition channel, such as thermal drifts or interference affecting the quality of the measurement, which lead to discarding the module.

6. Calibration Calculations

All the channels are calibrated in the range from -200 °C to 540 °C, acquiring 22 setpoints. Two of them correspond to the maximum and minimum points of the range; the remaining 20 are distributed uniformly between 10% and 90% of the range. The setpoint validation is performed as described in this article, and the following information is obtained:

• Best Straight Line (BSL) coefficients, obtained from the expression $y = m x + b$, where

$m = \dfrac{n \sum_{i=1}^{n} x_i y_i - \sum_{i=1}^{n} x_i \sum_{i=1}^{n} y_i}{n \sum_{i=1}^{n} x_i^2 - \left( \sum_{i=1}^{n} x_i \right)^2}$

$b = \dfrac{\sum_{i=1}^{n} x_i^2 \sum_{i=1}^{n} y_i - \sum_{i=1}^{n} x_i \sum_{i=1}^{n} x_i y_i}{n \sum_{i=1}^{n} x_i^2 - \left( \sum_{i=1}^{n} x_i \right)^2}$

• Combined uncertainty of measurement ($U_c$), calculated following the guidelines of the GUM. The expression is composed of the following contributions:

$U_c = \sqrt{U_1^2 + U_2^2 + U_3^2 + U_4^2}$

where $U_1$ is the calibration uncertainty of the standard device, taken from its Certificate of Calibration $u_{cc}$ and corresponding to a normal (Gaussian) distribution function:

$U_1 = \dfrac{u_{cc}}{2}$

and $U_2$ is the uncertainty from the samples; it is given by the experimental standard deviation of the mean (normal distribution function).
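The sliding-window criterion can be sketched as follows (a minimal illustration; the tolerance thresholds are placeholders, since the exact values of the authors' tool are not given in the paper):

# Sketch of the 100-sample validation window split into five 20-sample blocks.
from statistics import mean, stdev

def window_is_valid(window, drift_tol, noise_tol):
    """Reject windows whose block means trend monotonically or whose noise is not uniform."""
    assert len(window) == 100
    blocks = [window[i:i + 20] for i in range(0, 100, 20)]
    means = [mean(b) for b in blocks]
    stds = [stdev(b) for b in blocks]
    diffs = [means[i + 1] - means[i] for i in range(4)]
    trending = all(d > 0 for d in diffs) or all(d < 0 for d in diffs)  # increasing/decreasing mean
    uniform_noise = (max(stds) - min(stds)) <= noise_tol               # uniform noise per block
    return (not trending) and (max(means) - min(means) <= drift_tol) and uniform_noise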
To estimate the standard deviation of the data from a sample, i.e. a set of observations of a particular magnitude taken under the same conditions, the experimental standard deviation $\sigma(x)$ is used:

$\sigma(x) = \sqrt{\dfrac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n - 1}}$

The best estimate of the experimental standard deviation is the experimental standard deviation of the mean:

$U_2 = \sigma(\bar{x}) = \sqrt{\dfrac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n(n - 1)}}$

$U_3$ is the uncertainty calculated from the maximum error in absolute value obtained from the residues of the Best Straight Line (BSL), treated as a rectangular distribution function:

$U_3 = \dfrac{MaxErrorBSL}{\sqrt{3}}$

$U_4$ is the uncertainty due to the quantization error of the converter of the data acquisition system card, treated as a rectangular distribution function:

$U_4 = \dfrac{QuantizationError / 2}{\sqrt{3}}$

• Expanded uncertainty: the result of multiplying the combined uncertainty by a coverage factor $K$:

$U = \pm K \times U_c(y)$

For a coverage factor $K = 2$, the probability of containing the true value is 95%:

$Y - U \leq TrueValue \leq Y + U$

7. Conclusion

The increasing demand for trials, as well as the high number of parameters required in them, forces us to find solutions that make the results available in a short period of time, keeping the balance between "Quality - Time - Cost". In test-measurement instrumentation, the need to maintain metrological quality criteria has been demonstrated, because of the serious consequences that can result from not limiting the validity of the data correctly. The consequences would be such that:
• Not all equipment manufactured with the same procedures would have the same functionality.
• The manufacturing process would be more demanding and expensive than necessary.

Taking into account that increasing the resources would increase the cost exponentially, it is necessary to reduce the calibration time, in data acquisition and analysis as well as in documentation generation. The automation of the calibration is mandatory but not enough: we have to include validation criteria that allow a significant number of samples to be obtained with a low acquisition time. This criterion was applied to 192 conditioning and measurement channels of RTD PT100 temperature sensors. The comparative times are the following:
• Manual calibration: 150 hours
• Automatic calibration: 80 hours
• Automatic calibration with validation algorithms: 45 minutes

8. References

[1] Guide to the expression of Uncertainty in Measurement, JCGM, 2008.
[2] Quality Procedure DP-000-021, Airbus Defence & Space, internal document, 2009.
[3] Standard CASA-1294, Airbus Defence & Space, internal document, 2014.

9. Glossary

BSL: Best Straight Line.
BIPM: Bureau International des Poids et Mesures.
CEM: Centro Español de Metrología.
DAS: Data Acquisition System.
GUM: Guide to the expression of Uncertainty in Measurement.
JCGM: Joint Committee for Guides in Metrology.
PTB: Physikalisch-Technische Bundesanstalt.
NIST: National Institute of Standards and Technology.
RTD: Resistance Temperature Detector.
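As a worked illustration of section 6 (a minimal sketch following the formulas above; in practice the inputs come from a real calibration run):

# Sketch of the BSL fit and GUM-style uncertainty combination of section 6.
import math

def bsl(xs, ys):
    """Least-squares Best Straight Line y = m*x + b, plus the residues."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    den = n * sxx - sx ** 2
    m = (n * sxy - sx * sy) / den
    b = (sxx * sy - sx * sxy) / den
    residues = [y - (m * x + b) for x, y in zip(xs, ys)]
    return m, b, residues

def expanded_uncertainty(u_cc, samples, residues, quant_error, k=2.0):
    n = len(samples)
    mu = sum(samples) / n
    sigma = math.sqrt(sum((s - mu) ** 2 for s in samples) / (n - 1))
    u1 = u_cc / 2                                        # standard's certificate (normal)
    u2 = sigma / math.sqrt(n)                            # std deviation of the mean
    u3 = max(abs(r) for r in residues) / math.sqrt(3)    # BSL residues (rectangular)
    u4 = (quant_error / 2) / math.sqrt(3)                # ADC quantization (rectangular)
    return k * math.sqrt(u1**2 + u2**2 + u3**2 + u4**2)  # U = K * Uc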


Design and implementation of LAN-based real-time simulation system of high frequency communication - Rui Song, Daquan Li, Guangming Zhou and Guojiang Xia - Beijing Institute of Astronautical Systems Engineering - China

For the purpose of testing military high frequency communication software, as well as teaching and training in an indoor environment, a LAN-based real-time simulation system of high frequency communication is designed and implemented. The architecture of the system is proposed according to the requirements. Solutions and implementations for the key problems, such as the real-time behaviour of the system, the capture and retransmission of valid IP datagrams, and the processing of large amounts of data, are given in detail. In the system, the IP-datagram-based communication between computer terminals on the LAN is delayed and subjected to losses as in real high frequency communication, which provides a testing and training environment very close to real high frequency communication. The system greatly reduces cost and satisfies the training requirements in an information-based environment.
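The core mechanism (delaying and randomly dropping captured IP datagrams so that the LAN behaves like an HF link) can be sketched as follows; this is an illustration under assumed loss and delay figures, not the authors' implementation, and a real-time system would schedule retransmissions asynchronously rather than sleeping inline:

# Sketch: emulate HF-channel impairments on captured LAN datagrams.
import random, time

LOSS_PROBABILITY = 0.15      # assumed HF frame-loss rate
DELAY_RANGE_S = (0.05, 0.4)  # assumed propagation + processing delay

def relay(datagram, send):
    """Drop or delay a captured datagram before retransmitting it on the LAN."""
    if random.random() < LOSS_PROBABILITY:
        return                                  # datagram lost, as on a poor HF channel
    time.sleep(random.uniform(*DELAY_RANGE_S))  # channel latency
    send(datagram)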


Characterization of the unavailability due to rain of an X band radar used for range safety at Kourou Space Center - Frédéric Lacoste, Jérémie Trilles and Clément Baron – CNES – France

At 10.5 GHz, the attenuation due to rain may become significant, in particular for a tropical climate such as that of French Guiana. For SatCom system design, ITU-R Rec. P.618 is classically used for the statistical characterization of the tropospheric power impairment. Nevertheless, it provides information for an average year, which fits the space launch context quite poorly. Moreover, the current ITU-R model relies on rather few tropical and low-elevation links, and its accuracy in these conditions may be questioned. Consequently, a local assessment of the tropospheric impairment on the radar path was carried out, based on 7 years of rain radar data collected by Meteo France in Kourou. These radar data were first converted into rainfall rate using concurrent rain amount data collected by Meteo France at the same time and location. Then, the rainfall rate was converted into rain attenuation considering a realistic rain drop-size distribution and rain height. Attenuation due to gases was also added to compute the total tropospheric impairment. Finally, monthly statistics of attenuation were computed for a sample set of real launches (Ariane, Soyuz, Vega) from Kourou. Thanks to this attenuation database, based on real and consistent weather data, link budgets were drawn up for the three launchers for different trajectories. The typical parameters of a skin-echo X band radar were used in order to verify the ability of this kind of radar to ensure CSG range safety in any season, for any trajectory and launcher, considering a fixed radar cross-section for each launcher.
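The rainfall-rate-to-attenuation step typically relies on a power law for the specific attenuation, gamma = k * R^alpha in dB/km, in the spirit of ITU-R Rec. P.838; the sketch below uses placeholder coefficients and path length, not the values of the study:

# Sketch: power-law conversion from rainfall rate to path attenuation.
# Coefficients and effective path length are placeholders.
def rain_attenuation_db(rain_rate_mm_h, path_km, k=0.01, alpha=1.2):
    """gamma = k * R**alpha (dB/km), integrated over the effective path."""
    gamma = k * rain_rate_mm_h ** alpha
    return gamma * path_km

print(rain_attenuation_db(rain_rate_mm_h=50.0, path_km=8.0))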


ETTC 2015 – European Test & Telemetry Conference

Channel capacity estimation of stacked circularly polarized patch antennas suitable for drone applications
A. Ioannis Petropoulos1, B. Jacques Sombrin1, C. Nicolas Delhote1, D. Cyrille Menudier1
1: SigmaLim-Labex, University of Limoges, 123 avenue Albert Thomas, 87060 Limoges, France

Abstract: In this paper, two low-cost, circularly polarized stacked patch antennas operating at 2.4 GHz and suitable for drone applications are presented. The first provides 700 MHz of bandwidth and 5 dB of gain, while the second has 450 MHz of bandwidth and 7.5 dB of gain. In addition, both antennas are evaluated in terms of channel capacity for two scenarios of drone flight based on French legislation. Channel capacity results are depicted and commented on.

Keywords: patch antenna, circular polarization, channel capacity

1. Introduction

Drones have attracted significant attention in recent years, since they are a low-cost solution for providing services such as surveillance of regions, reconnaissance of objects, photo shooting, videotaping of events, etc. They can be used in a wide variety of applications, from agricultural activities and monitoring of areas to border patrol and mapping of places that are difficult to reach. In addition, they are sometimes used by the authorities to transmit sensitive and confidential data. It is therefore evident that drones have to be equipped with a low-profile, flexible and robust antenna that will ensure reliable communication with the ground node. Several types of antennas have been presented in the literature to overcome various communication problems. In [1] a highly directive planar patch array is presented to overcome the constraints of high UAV flight altitude and limited GCS coverage. A 2.4 GHz antenna with a toroidal radiation pattern has been presented in [2], while in [3] a low-profile, L-shaped monopole with vertical polarization is described. Besides the antenna issue, special attention has been given to communication link performance. In [4] a software layer has been developed to establish a reliable link between a drone and a base station, and in [5] a strategy for a flying relay between two mobile ground nodes has been studied. Moreover, link performance has been assessed for different antenna orientations on the UAV and the GCS [6].

In this paper, the present state of the art of the communication between a drone and a ground node is described, where a dipole antenna is used on the side of the drone and a Yagi-Uda antenna on the side of the GCS. Next, two low-cost, low-profile stacked patch antennas to be mounted on a rotary-wing drone are presented. The proposed antennas are circularly polarized, so that they provide efficient communication with the ground node regardless of the orientation and movement of the drone. In addition, a channel capacity estimation is carried out for the two proposed antennas for two different scenarios of drone flight based on French legislation. Channel capacity is plotted as a function of the horizontal distance between the drone and the ground node, and commented on. The paper is organised as follows: in section 2, the present state of the art of communication between the drone and the GCS and a novel concept of communication are depicted and described, presenting also two antennas suitable for drone applications. In section 3, channel capacity evaluations take place using the antennas described in the previous section. Finally, in section 4, some conclusions are drawn regarding the antenna features.

2. Drone antennas
2.1 Drone communication - present state of the art

The present technology comprises a dipole antenna on the side of the drone and a Yagi-Uda antenna on the side of the GCS. Figure 1 depicts the existing drone transmission link.

Figure 1: Present communication link between a rotary-wing drone and the GCS

A channel capacity evaluation of the communication link presented in figure 1 is carried out, and the results are depicted in section 3.

2.2 Proposed concept of communication

The antennas described in this study are suitable for drone applications according to figure 2, which depicts the communication between a rotary-wing drone and the GCS.

Figure 2: Communication link between a rotary-wing drone and the GCS

A patch antenna is mounted on the fuselage of the drone, producing a main lobe of radiation that always points normal to the ground, while the GCS is also equipped with a similar antenna with fixed orientation.

2.3 Antenna designs

A 90° hybrid coupler (HC) designed on FR4 substrate is used to excite a square patch antenna with notches, placed upon a foam layer. The configuration of the antenna is depicted in figure 3.

Figure 3: HC patch antenna configuration; (a) simulation design, (b) cross-section

The hybrid coupler is a microstrip circuit described in [7], in which the signals exiting the output ports are 90° out of phase. The square patch has two notches at two of its edges and is fed by the hybrid coupler through two orthogonal slots designed in the ground plane, as can be seen in figure 3a. The 90° hybrid coupler, in combination with the two orthogonal slots in the ground plane and the notches, is introduced in the design to achieve circular polarization [8]. The use of FR4 substrate with a foam layer keeps the cost low and also provides bandwidth enhancement in the 2.4 GHz frequency band. In addition, a slotted circular patch antenna is designed, depicted in figure 4.

Figure 4: Slotted circular patch antenna; (a) simulation design, (b) cross-section

In this case the desired circular polarization is achieved by introducing notches and a slot in the patch. The antenna is excited by electromagnetic coupling through another slot designed in the ground plane. Once again, the composite structure of FR4 and foam layer provides enhanced bandwidth over the 2.4 GHz frequency band.

2.4 Antenna features

The S11 parameter of the described antennas is depicted in figure 5.

Figure 5: S11 parameter of the proposed antennas

The HC patch antenna provides a bandwidth of 700 MHz (S11 < -10 dB) and resonances at 2.42 GHz where S11 = -22 dB, at 2.6 GHz where S11 = -24 dB, and at 2.92 GHz where S11 = -20 dB. The circular slotted patch antenna has a bandwidth of 400 MHz and a resonance at 2.36 GHz where S11 = -30 dB. The 2D radiation patterns of the presented antennas are depicted in figure 6.

Figure 6: 2D radiation patterns; (a) HC patch antenna, (b) circular slotted patch antenna

Gain and half-power beamwidth for both antennas are given in table 1.
Table 1: Antenna features

Antenna                          Gain (2.4 GHz)   HPBW
HC patch antenna                 5 dB             60° (xz plane), 64° (yz plane)
Circular slotted patch antenna   7.5 dB           68° (xz plane), 62° (yz plane)

More efficient coupling between the feeding line and the circular slotted patch antenna leads to a higher gain than that of the HC patch antenna, which is fed by the more complex and loss-prone hybrid coupler. On the other hand, the HC patch antenna provides a smaller HPBW in the xz plane. The polarization characteristics of the studied antennas are also examined in terms of axial ratio. The results are depicted in figure 7.

Figure 7: Axial ratio of the proposed antennas

Both antennas are circularly polarized (AR < 3 dB) within the Wi-Fi 2.4 GHz frequency band, that is, from 2.4 GHz to 2.5 GHz.

3. Channel capacity evaluations

In order to evaluate the channel capacity of the communication between the drone and the GCS, it is necessary to calculate the link budget in the first place. The link budget is given by the equation [9]:

PR = PT + GT - LT - LFS + GR - LR - LM   [1]

where:
PR in dBm is the power at the receiver (link budget)
PT in dBm is the transmitted power
GT in dB is the antenna gain of the transmitter
LFS in dB is the free-space path loss
GR in dB is the antenna gain of the receiver
LR in dB are the losses at the receiver, considered to be 1 dB
LT in dB are the losses at the transmitter, considered to be 2 dB
LM in dB are the losses due to polarization mismatch and fading, considered to be 2 dB

The free-space loss is calculated by the formula:

LFS = 20 log(d) + 20 log(f) + 92.45   [2]

where:
d in km is the distance
f in GHz is the frequency

Next, for the evaluation of the channel capacity, the SNR and the noise at the receiver are required. The SNR is calculated from the equation:

SNR = PR - NR   [3]

where:
PR in dBm is the received power
NR in dBm is the noise at the receiver

The noise at the receiver is calculated using the formula:

NR = 10 log(KTB)   [4]

where:
K is Boltzmann's constant, K = 1.38x10⁻²³ J/K
T in kelvin is the temperature
B is the bandwidth at the receiver; a 22 MHz channel bandwidth is considered, based on the specifications of the Wi-Fi 2.4 GHz frequency band

Finally, the channel capacity is obtained from the equation:

C = B log2(1 + SNR)   [5]

where:
B is the bandwidth at the receiver; here also a 22 MHz channel is considered
SNR is the signal-to-noise ratio, used in linear form in equation [5] (the dB value from equation [3] must be converted accordingly)
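A minimal Python sketch of equations [1] to [5] follows; the fixed losses are the values quoted above, while the transmit power, temperature and slant-range geometry are assumptions, so the absolute numbers will not reproduce the paper's figures:

# Sketch of the link-budget and capacity chain of equations [1]-[5].
import math

K_BOLTZMANN = 1.38e-23  # J/K

def channel_capacity_mbps(s_m, h_m, gain_tx_db, gain_rx_db,
                          pt_dbm=20.0, f_ghz=2.4, bw_hz=22e6, temp_k=290.0):
    d_km = math.hypot(s_m, h_m) / 1000.0                                # slant range drone-GCS
    l_fs = 20 * math.log10(d_km) + 20 * math.log10(f_ghz) + 92.45       # eq. [2]
    pr_dbm = pt_dbm + gain_tx_db - 2.0 - l_fs + gain_rx_db - 1.0 - 2.0  # eq. [1]
    nr_dbm = 10 * math.log10(K_BOLTZMANN * temp_k * bw_hz) + 30.0       # eq. [4], in dBm
    snr_lin = 10 ** ((pr_dbm - nr_dbm) / 10)                            # eq. [3], linearized
    return bw_hz * math.log2(1 + snr_lin) / 1e6                         # eq. [5], Mb/s

print(channel_capacity_mbps(s_m=1000.0, h_m=50.0, gain_tx_db=5.0, gain_rx_db=5.0))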
For the evaluation of the channel capacity, two scenarios have been considered, based on the French legislation for drone flights [10]. The first assumes 50 m of drone altitude and 1 km of maximum horizontal distance between the drone and the GCS, while the second considers 150 m of altitude and unlimited distance; in this study, however, a maximum distance of 5 km between the drone and the GCS has been considered. Channel capacity evaluations have first been carried out for the first scenario (drone height of 50 m, 1 km maximum horizontal distance), considering a dipole antenna on the drone side. A 2.4 GHz dipole antenna with 5 dB of gain is taken into account (figure 1). The channel capacity results are depicted in figure 8.

Figure 8: Channel capacity for dipole antenna use

It is evident from figure 8 that for certain values of the horizontal distance s, the communication efficiency in terms of channel capacity is very poor. The link efficiency is satisfactory for distances s from 10 m to 500 m, but for higher values of s the channel capacity becomes very limited. Let us mention here that the data rates required to support HD video transmission range from 11 Mb/s to 54 Mb/s in the 2.4 GHz frequency band. In addition, channel capacity evaluations have been performed assuming that the drone and GCS antenna gain is 5 dB (HC patch antenna), and a further evaluation has been obtained considering a drone and GCS antenna gain of 7.5 dB (slotted circular patch antenna). Calculations based on the formulas described in this section gave the results depicted in figure 9.

Figure 9: Channel capacity evaluation; (a) scenario of 50 m of drone altitude and 1 km of distance from the GCS, (b) scenario of 150 m of drone altitude and 5 km of distance from the GCS

The horizontal axis of figure 9 denotes the horizontal distance between the drone and the GCS, also shown in figure 1. According to figure 9a, the HC patch antenna provides a maximum of approximately 250 Mb/s when the drone is exactly over the GCS. When moving away from it, the capacity decreases, reaching 9.5 Mb/s at 1000 m. The circular slotted patch antenna presents better performance, starting from 280 Mb/s at s = 0 m and reaching 22 Mb/s at s = 1000 m, because a higher-gain antenna is used. Taking into consideration that Wi-Fi at 2.4 GHz can support HD video with data rates of 11 Mb/s to 54 Mb/s [11], it is evident that both antennas can be used. Figure 9b shows that the HC patch antenna provides a maximum capacity of 204 Mb/s, which then decreases exponentially, approaching 270 kb/s at s = 5000 m. The circular slotted patch antenna has a maximum of 230 Mb/s, with the capacity then decreasing to 540 kb/s at s = 5000 m. In this case the maximum capacity achieved at s = 0 m is lower than in the previous scenario, because the drone's altitude is significantly higher and the signal arrives at the GCS degraded. The HC patch antenna can support high-quality communication up to a distance of about 1900 m, while the circular slotted patch antenna is efficient up to about 2400 m.

5. Conclusion

In this paper two patch antennas suitable for drone applications, operating in the Wi-Fi 2.4 GHz frequency band, were presented. The HC patch antenna provides 700 MHz of bandwidth and 5 dB of gain, while the circular slotted patch antenna has a bandwidth of 400 MHz and 7.5 dB of gain. Both antennas are circularly polarized in order to ensure a continuous and reliable communication link between the drone and the GCS. In addition, channel capacity evaluations were carried out using the proposed antennas, considering two scenarios of drone flight based on French legislation. These evaluations showed that both antennas are suitable for providing HD video in the scenario of 50 m of drone altitude and a maximum distance of 1000 m to the GCS. For the second scenario (150 m of drone altitude and 5000 m of maximum distance to the GCS), the HC patch antenna is appropriate for distances up to 1900 m, while the circular slotted patch antenna operates efficiently within a range of 2400 m. Moreover, the proposed antennas have a planar geometry and a low profile, and can easily be mounted on the fuselage of the drone. Both antennas cover more than the 100 MHz bandwidth of the Wi-Fi 2.4 GHz band (2.4 GHz - 2.5 GHz), thanks to the multi-layer stacked configuration. The use of an FR4 superstrate not only protects the antenna from wear, but also contributes to gain and efficiency enhancement, as shown in [12].
The 50 Ω microstrip feed lines used in the antenna designs facilitate integration with other microstrip circuits with a minimum of loss. The designs provide sufficient gain with 60° to 68° of HPBW, so the signal is sent towards one direction instead of spreading radiation over all directions, as in the case of dipole antennas. In this way the information is better secured and protected in the case of transmission of sensitive data. Both patch antennas presented in this study outperform the dipole antenna, as they both provide adequate channel capacity over a horizontal distance of 1 km, while the dipole antenna operates efficiently only for a limited horizontal distance from 10 m to 500 m.

6. References

[1] P. Park, S. Choi, D. Lee, B. Lee, "Performance of UAV (Unmanned Aerial Vehicle) communication system adapting WiBro with array antenna", 11th International Conference on Advanced Communication Technology, Phoenix, USA, 2009.
[2] N.M. Boev, "Design and implementation antenna for small UAV", International Siberian Conference on Control and Communications, Krasnoyarsk, Russia, 2011.
[3] Z. Liu, Y. Zhang, Z. Qian, Z. P. Han, W. Ni, "A Novel Broad Beamwidth Conformal Antenna on Unmanned Aerial Vehicle", IEEE Antennas and Wireless Propagation Letters, vol. 11, 2012.
[4] J.P. Bodanese, G.M. de Araujo, C. Steup, G.V. Raffo, L.B. Becker, "Wireless Communication Infrastructure for a Short-Range Unmanned Aerial", 28th International Conference on Advanced Information Networking and Applications Workshops, Victoria, BC, Canada, 2014.
[5] C. Ben Moussa, F. Gagnon, O. Akhrif, S. Gagne, "Aerial Mast vs Aerial Bridge Autonomous UAV Relay: A Simulation-Based Comparison", 6th International Conference on New Technologies, Mobility and Security, Dubai, 2014.
[6] C.M. Cheng, P.H. Hsiao, H.T. Kung, D. Vlah, "Performance Measurement of 802.11a Wireless Links from UAV to Ground Nodes with Various Antenna Orientations", 15th International Conference on Computer Communications and Networks, Arlington, VA, USA, 2006.
[7] K.-K.M. Cheng, Fai-Leung Wong, "A novel approach to the design and implementation of dual-band compact planar 90° branch-line coupler", IEEE Transactions on Microwave Theory and Techniques, vol. 52, no. 11, 2004.
[8] Steven (Shichang) Gao, Qi Luo, Fuguo Zhu, "Circularly Polarized Antennas", first edition, John Wiley & Sons, Ltd., 2014.
[9] John S. Seybold, "Introduction to RF Propagation", chapter 4, page 66, John Wiley & Sons, Inc., 2005.
[10] http://www.legifrance.gouv.fr/affichTexte.do?cidTexte=JORFTEXT000025834986&dateTexte=&categorieLien=id
[11] Mazliza Othman, "Principles of Mobile Computing and Communications", chapter 4, page 77, Auerbach Publications, 2007.
[12] N. Alexopoulos, D.R. Jackson, "Fundamental superstrate (cover) effects on printed circuit antennas", IEEE Transactions on Antennas and Propagation, vol. 32, no. 8, 1984.

7. Glossary

HC: Hybrid Coupler
GCS: Ground Control System
AR: Axial Ratio
SNR: Signal to Noise Ratio
HD: High Definition


ETTC 2015 – European Test & Telemetry Conference

A Gaussianization-based performance enhancement approach for coded digital PCM/FM
Guojiang Xia, Xinglai Wang, Kun Lan
Beijing Institute of Astronautical Systems Engineering, Nandahongmen Road 1, Beijing, China

Abstract: The BER performance of the coded digital PCM/FM telemetry system depends on the accuracy of the input likelihood metrics, which are greatly influenced by the click noise. This letter presents a Gaussianization approach to weaken the influence of the click noise. The outputs of the limiter/discriminator are first modelled by a Gaussian mixture model, whose parameters are estimated by the expectation-maximization algorithm; the amplitudes are then adjusted by a proposed Gaussianization filter so that they become more accurate as likelihood metrics. When a (64,57)² TPC is applied, simulation results show a coding gain of 0.8 dB at the 10⁻⁴ BER level.

Keywords: PCM/FM, limiter/discriminator, Gaussianization, turbo product codes, LDPC

1. Introduction

Pulse code modulation/frequency modulation (PCM/FM), which has the advantages of resistance to flame, polarization, multipath fading and phase interference, is a commonly deployed technique in a variety of telemetry areas and other applications. Many forward error-correction (FEC) codes, such as convolutional codes, Reed-Solomon (RS) codes, turbo product codes (TPC) and low-density parity-check (LDPC) codes [1-4], are employed to enhance the bit error rate (BER) performance of the digital PCM/FM telemetry system. Because of its simplicity, the limiter/discriminator (L/D) is often used for the demodulation of digital FM systems. However, it is well known that the noise in the demodulated signal at the output of the L/D becomes impulsive when the carrier-to-noise power ratio decreases below about 10 dB, even when the channel is an additive white Gaussian noise (AWGN) channel. The most famous description of this kind of noise was proposed in 1963 by S. O. Rice [5], who regarded the noise as the sum of two related components: approximately Gaussian noise and a kind of impulsive noise, the so-called click noise.

The performance of all soft-input soft-output (SISO) decoding algorithms, such as the Viterbi decoder, the Chase decoder and the belief propagation decoder, depends on the accuracy of the soft-input likelihood metrics. Accurate likelihood metrics are easy to obtain when the amplitude distribution of the noise in the soft-input signal is known, for example for Gaussian noise. However, if the amplitude distribution of the noise is unknown or hard to know, accurate likelihood metrics are hard to obtain. In practical coded PCM/FM systems, the decoders of the FEC codes adopt a Gaussian assumption for the sake of simplicity. This is not optimal for the click-noise background and degrades the SISO decoding performance. Many efforts, reported in [6], have been made to improve the SISO decoding performance of the coded digital FM telemetry system with L/D. Most of them are based on Rice's click model and intend to detect and eliminate the click noise. However, it is very difficult to eliminate the click noise completely because of its randomness. Even if the click noise is eliminated completely, the performance of the Gaussian-assumption SISO decoder is still not good, because the approximately Gaussian noise component in Rice's model is essentially non-Gaussian.
In this letter, the noise at the output of the L/D is not decomposed into approximately Gaussian noise plus click noise, as in Rice's model, but is treated as a single non-Gaussian noise. A Gaussianization approach is proposed to bring the probability distribution of this non-Gaussian noise closer to Gaussian, so that the likelihood metrics obtained by the SISO decoders in the digital PCM/FM system with L/D become more accurate and the BER performance of the system improves.

The remainder of this letter is organized as follows. Section II briefly reviews the coded digital PCM/FM telemetry system with L/D. Section III introduces the Gaussian mixture density (GMD) model, the expectation-maximization (EM) algorithm and the Gaussianizing filter used in the Gaussianization scheme. Section IV presents the proposed Gaussianization approach for the coded digital PCM/FM telemetry system with L/D. Section V gives the simulation results of the proposed approach when TPC and LDPC codes are employed as FEC codes. Finally, conclusions are drawn in Section VI.

2. Review of the coded digital PCM/FM system

This section reviews the coded digital PCM/FM telemetry system with L/D. The model of the considered system is shown in Figure 1.

Figure 1: Model of the PCM/FM telemetry system

In the coded digital PCM/FM telemetry system, the telemetry data is first encoded by a PCM encoder and then by an FEC encoder, which may implement RS, TPC or LDPC codes. The pre-filter eliminates inter-symbol interference and improves bandwidth efficiency. The L/D serves as the demodulator and is followed by an FEC SISO decoder.

The modulated FM signal $S_{FM}(t)$ is

$$S_{FM}(t) = A \cos\left(\omega_c t + K_{FM} \int f(t)\,dt\right) \qquad [1]$$

where $f(t)$ is the modulating signal (the output of the pre-filter), $\omega_c$ is the carrier frequency, $A$ is the carrier amplitude, and $K_{FM}$ is the frequency-deviation constant fixed by the hardware circuit.

The structure of the L/D is depicted in Figure 2.

Figure 2: Structure of the limiter/discriminator, with input $S_i(t)$, differentiator output $S_d(t)$ and envelope-detector output $S_o(t)$

The L/D is composed of a differentiator and an envelope detector. The input of the differentiator is $S_{FM}(t)$ from equation [1]; its output $S_d(t)$ is

$$S_d(t) = -A\left(\omega_c + K_{FM} f(t)\right)\sin\left(\omega_c t + K_{FM} \int f(t)\,dt\right) \qquad [2]$$

From equation [2], $S_d(t)$ is the derivative of $S_{FM}(t)$, and its envelope varies linearly with $f(t)$. The output of the envelope detector, excluding the direct-current term, is

$$S_o(t) = K_d K_{FM} f(t) \qquad [3]$$

where $K_d$ is a constant set by the L/D circuit. From equation [3], $f(t)$ can be recovered.
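As a companion to equations [1]-[3], the following minimal NumPy sketch mimics the modulation/demodulation chain of Figure 1 and shows the impulsive residual noise that motivates the paper. It is not the authors' MATLAB simulator: the parameter values are illustrative (not the 80 MHz / 10 Mbps setup of Section V), and the limiter is realized via the unit-modulus analytic signal, one possible implementation among others.

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)

# Illustrative parameters only (not the paper's simulation setup).
fs = 1e6            # sampling rate [Hz]
fc = 1e5            # carrier frequency [Hz]
kfm = 2 * np.pi * 2e4  # frequency-deviation constant [rad/s per unit of f(t)]
n = 20000
t = np.arange(n) / fs

# NRZ baseband stream f(t), 200 samples per symbol.
f = np.sign(rng.standard_normal(n // 200)).repeat(200)

# Equation [1]: S_FM(t) = A cos(wc*t + K_FM * integral of f).
phase = 2 * np.pi * fc * t + kfm * np.cumsum(f) / fs
s_fm = np.cos(phase) + 0.3 * rng.standard_normal(n)  # AWGN channel

# Limiter: strip amplitude fluctuations, keep only the phase.
analytic = hilbert(s_fm)
limited = analytic / np.abs(analytic)

# Discriminator (equations [2]-[3]): the instantaneous frequency,
# i.e. the phase derivative, is proportional to f(t) plus a DC term.
inst_freq = np.diff(np.unwrap(np.angle(limited))) * fs
s_o = inst_freq - 2 * np.pi * fc  # remove the DC (carrier) term

# At low CNR the residual noise is impulsive (click noise); its
# kurtosis is well above the Gaussian value of 3.
noise = s_o - kfm * f[:-1]
print("noise kurtosis:", ((noise - noise.mean())**4).mean() / noise.var()**2)
```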
3. Gaussianization approach

Gaussianization is an important technique in many signal-processing and detection areas, such as non-Gaussian autoregressive processes [7] and speech processing. In these areas the noise background is usually assumed to be Gaussian for the sake of simplicity, yet a wide variety of signal probability distributions are non-Gaussian, and the mismatch between the assumption and the actual distribution degrades the performance of the matched filter, the correlation test, maximum-likelihood decoding, and so on.

However, by exploiting the statistical characteristics of the non-Gaussian background, its probability distribution can be made more "Gaussian-like": the amplitudes of the background noise are adjusted so that their distribution becomes closer to a Gaussian distribution than before, and the performance then improves.

A typical Gaussianization procedure runs as follows. First, the probability distribution of the non-Gaussian background is fitted by a non-Gaussian probability model [7]. Next, the parameters of that model are determined by a parameter-estimation algorithm [8]. Finally, according to the estimated parameters, the amplitudes of the non-Gaussian background are adjusted by a Gaussianization processing module so that the distribution is Gaussianized.

Among the many non-Gaussian probability models, such as the Gaussian mixture density (GMD) model, the class-A model and the K-distribution model [7], the GMD model is adopted in this letter because it fits a wide variety of non-Gaussian distributions effectively:

$$f(s) = \sum_{i=1}^{M} \lambda_i f_i(s \mid \mu_i, \sigma_i), \qquad \sum_{i=1}^{M} \lambda_i = 1 \qquad [4]$$

where $s$ is the signal with non-Gaussian background noise, $f(s)$ is its GMD model, the $f_i(s)$ are Gaussian probability density functions with mean $\mu_i$ and variance $\sigma_i$, $M$ is the order of the GMD model (the number of components $f_i(s)$), and $\lambda_i$ is the mixture parameter giving the relative weight of each $f_i(s)$. The order $M$ reflects the statistical character of $s$: the larger $M$, the more accurate the GMD model, but also the higher the computational cost, so in practice $M$ is a trade-off between cost and accuracy. In this letter $M$ is set to 2, the simplest case, and the GMD model is second-order:

$$f(s) = \lambda f_B(s \mid \mu_B, \sigma_B) + (1-\lambda) f_I(s \mid \mu_I, \sigma_I) \qquad [5]$$

where $f_B(s)$ is a Gaussian probability density function with mean $\mu_B$ and variance $\sigma_B$, and $f_I(s)$ one with mean $\mu_I$ and variance $\sigma_I$. By analogy with Rice's click model, $f_B(s)$ captures the random background component of the non-Gaussian noise, while $f_I(s)$ captures its impulsive component.

The parameter group $g = [\lambda, \mu, \sigma]$ of the GMD model must be determined from the statistics of $s$. Many estimators can be used: the expectation-maximization (EM) algorithm [8], penalized maximum-likelihood estimation, indirect least-squares estimation of the cumulant generating function, and so on [9]. Being widely used and efficient, the EM algorithm is adopted here. EM is iterative and needs initial values of the parameter group $g$, which can be set from experience or from the statistics of $s$.

The Gaussianization processing module in this letter is the so-called Gaussianizing filter proposed in [9]. A Gaussianizing filter adjusts the amplitudes of the input signal according to the parameters obtained by the estimation algorithm (strengthening the smaller amplitudes and weakening the larger ones) so that the amplitude distribution of the output becomes closer to Gaussian.
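The EM recursion for the two-component model of equation [5] is compact enough to state directly. Below is a straightforward one-dimensional sketch in the paper's notation; the function name em_gmd2, the vectorized layout and the variance floor are ours, not the authors'.

```python
import numpy as np

def em_gmd2(s, mu, sigma, lam=(0.5, 0.5), n_iter=50):
    """Fit the 2-order GMD model of equation [5] to samples s with EM.

    mu, sigma, lam form the initial parameter group g = [lambda, mu, sigma];
    returns the estimated group g' after n_iter iterations.
    """
    s = np.asarray(s, dtype=float)
    mu, sigma, lam = np.array(mu, float), np.array(sigma, float), np.array(lam, float)
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each sample.
        pdf = np.exp(-0.5 * ((s[:, None] - mu) / sigma)**2) / (sigma * np.sqrt(2 * np.pi))
        w = lam * pdf
        w /= w.sum(axis=1, keepdims=True)
        # M-step: re-estimate the weights, means and standard deviations.
        nk = w.sum(axis=0)
        lam = nk / len(s)
        mu = (w * s[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((w * (s[:, None] - mu)**2).sum(axis=0) / nk)
        sigma = np.maximum(sigma, 1e-9)  # guard against degenerate components
    return lam, mu, sigma
```

With the initialization of Section IV (one mean at the input mean, the other at the baseband amplitude), 50 iterations suffice in the paper's experiments.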
Two Gaussianizing filters, the U-filter and the G-filter, are described in [9]. The filter proposed in this letter is a revised version of the G-filter and is discussed in detail in the next section.

Note that the information carried by the original signal must be preserved through the Gaussianization step, otherwise performance remains poor. In this letter the similarity between the signals before and after Gaussianization is evaluated with the conventional correlation coefficient, defined as

$$r_{xy} = \frac{\sum_{i=1}^{N} (x_i - x_m)(y_i - y_m)}{\sqrt{\sum_{i=1}^{N} (x_i - x_m)^2 \, \sum_{i=1}^{N} (y_i - y_m)^2}} \qquad [6]$$

where $x$ and $y$ denote the signals before and after Gaussianization and $x_m$ and $y_m$ are their respective means. The value of $r_{xy}$ lies in $[-1, +1]$: the larger $r_{xy}$, the more similar the two signals. When $r_{xy} = 1$ the two signals are identical; when $r_{xy} = 0$ they are uncorrelated. From this point of view, a high $r_{xy}$ indicates that most of the information is kept through the Gaussianization step.
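Equation [6] is the ordinary sample correlation coefficient; a few-line NumPy version (equivalent to np.corrcoef) is:

```python
import numpy as np

def corr_coeff(x, y):
    """Sample correlation coefficient r_xy of equation [6]."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))  # == np.corrcoef(x, y)[0, 1]
```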
4. The proposed Gaussianization scheme

The traditional Gaussianization approaches of the previous section have so far been applied only in signal-processing and detection contexts; to the best of our knowledge, Gaussianization has not previously been applied to SISO decoding. In this letter a Gaussianization approach is proposed to improve the BER performance of the SISO decoder in the coded digital PCM/FM telemetry system with L/D.

The idea rests on the following fact. Because of the discriminator in the L/D, the noise in the demodulated signal at the L/D output is non-Gaussian even when the channel noise is additive white Gaussian. Yet the likelihood metrics of the traditional SISO decoding algorithms, such as Chase decoding and belief propagation, all rest on a Gaussian background assumption. The mismatch between the actual distribution and the Gaussian assumption makes the likelihood metrics inaccurate, which results in poor decoding BER performance. For clarity, "demodulated signals" refers in the following to the output signals of the L/D, i.e. the sum of the useful signal and the non-Gaussian noise.

Since Gaussianization can adjust the amplitudes of the demodulated signals, it is adopted here so that their distribution becomes closer to Gaussian, yielding more accurate likelihood metrics and a better SISO decoding BER performance. Figure 3 shows the proposed Gaussianization module in the receiver of the coded digital PCM/FM telemetry system.

Figure 3: Block diagram of the Gaussianization module in the coded digital PCM/FM receiver

In the proposed module, the probability distribution of the demodulated signals is approximated by the second-order GMD model of equation [5], whose parameter group $g = [\lambda, \mu, \sigma]$ is estimated with the EM algorithm. The initial setting of $g$ is crucial to the final result: with an inappropriate initialization the algorithm converges to a local maximum.

In our experiments the following initialization proved appropriate: the initial means $\mu_1$ and $\mu_2$ are set to the mean of the module's input signals and to the amplitude of the baseband data; the initial variances $\sigma_1$ and $\sigma_2$ are set to the variance of the input signals and to 1; the initial mixture parameters $\lambda$ are set to 0.5 and 0.5. After a fixed number of iterations (50 in this letter) or once a stopping criterion is met, the EM algorithm returns the estimated parameter group $g$.

The Gaussianizing filter proposed in this letter is the normalized G-filter (NG-filter):

$$f_{NG}(s \mid g') = \frac{\Phi^{-1}\!\left(\sum_{i=1}^{M} \lambda'_i \, \Phi\!\left(\frac{s - \mu'_i}{\sigma'_i}\right)\right)}{\Phi^{-1}\!\left(\sum_{i=1}^{M} \lambda'_i \, \Phi\!\left(\frac{m - \mu'_i}{\sigma'_i}\right)\right)} \qquad [7]$$

where $f_{NG}(s \mid g')$ are the outputs of the NG-filter, $\Phi(x)$ is the standard Gaussian cumulative distribution function and $\Phi^{-1}(x)$ its inverse, $g' = [\lambda', \mu', \sigma']$ are the second-order GMD parameters estimated by the EM algorithm, $s$ are the input signals of the Gaussianization module, and $m$ is the amplitude of the baseband data. Compared with the G-filter of [9], a normalization term is added as the denominator; thanks to it, the NG-filter outputs express magnitude relative to the baseband amplitude and can therefore be fed to the SISO decoder as more accurate likelihood metrics.

Almost all the computational cost of the proposed Gaussianization approach is concentrated in the EM algorithm: the more iterations, the larger the cost. In practice a moderate number of iterations, for example the 50 adopted in this letter, already produces accurate estimates. In a practical telemetry system the Gaussianization module can be realized in hardware or in software, as an optional performance-enhancement block placed between the L/D and the SISO decoder.
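A direct transcription of equation [7] as reconstructed above, fed with GMD parameters estimated by an EM routine such as the em_gmd2 sketch of Section III; the clipping guard, which keeps $\Phi^{-1}$ from saturating at $\pm\infty$ on extreme samples, is our addition.

```python
import numpy as np
from scipy.stats import norm

def ng_filter(s, lam, mu, sigma, m):
    """Normalized G-filter of equation [7].

    s              : demodulated samples (L/D output)
    lam, mu, sigma : estimated 2-order GMD parameter group g'
    m              : amplitude of the baseband data
    Returns soft values relative to the baseband amplitude, usable as
    likelihood metrics by the SISO decoder.
    """
    s = np.asarray(s, dtype=float)

    def gmd_cdf(x):
        # Mixture CDF of the fitted GMD model, clipped away from 0 and 1
        # so the inverse normal CDF below stays finite.
        p = sum(l * norm.cdf((x - u) / sg) for l, u, sg in zip(lam, mu, sigma))
        return np.clip(p, 1e-12, 1.0 - 1e-12)

    # Numerator: G-filter mapping of the samples; denominator: the same
    # mapping of the baseband amplitude m (the added normalization term).
    return norm.ppf(gmd_cdf(s)) / norm.ppf(gmd_cdf(m))
```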
5. Simulation results

As mentioned in Section III, the correlation coefficient characterizes the similarity of the signals before and after the Gaussianization module. In our simulations its average value is 0.9879, which means that almost all the information is kept while the amplitudes are adjusted.

The simulation results in this section show the decoding BER improvement brought by the proposed Gaussianization approach. The FEC codes adopted are TPC and LDPC codes. The TPC codes are TPC(64, 57)² and TPC(32, 26)², both built from extended Hamming component codes [10]; both have been chosen as FEC codes for PCM/FM telemetry [3]. The LDPC code is the (8160, 7136) code recommended by the Consultative Committee for Space Data Systems (CCSDS) [11].

The simulation system is built in MATLAB. The FM carrier frequency is 80 MHz, the baseband data rate is 10 Mbps, and the maximum frequency-deviation coefficient is 0.35. Figure 4 presents the BER comparison for the TPC-coded PCM/FM system with L/D, together with the performance of the uncoded digital PCM/FM system with L/D.

Figure 4: Comparison of BER performance with and without the Gaussianization approach in TPC-coded PCM/FM with limiter/discriminator: (a) TPC(64, 57)², (b) TPC(32, 26)²

The SISO decoding algorithm for both TPC codes in Figure 4 is the Chase II algorithm with 8 iterations [10]. At the 10⁻⁴ BER level, the required Eb/N0 without the Gaussianization approach is about 9.8 dB for TPC(64, 57)² and 10 dB for TPC(32, 26)², whereas with the Gaussianization approach it is 9 dB and 9.5 dB respectively, yielding coding gains of 0.8 dB and 0.5 dB.

Figure 5 presents the BER comparison for the LDPC-coded PCM/FM system with and without the Gaussianization approach. The SISO decoding algorithm for the LDPC code is the min-sum algorithm with alpha = 1.25 and beta = 0; the number of iterations is 50 and the quantization mode is 1-3-4 [11]. At the 10⁻⁴ BER level, the required Eb/N0 without the Gaussianization approach is about 9.6 dB, versus 9.2 dB with it, a coding gain of 0.4 dB.

Figure 5: Comparison of BER performance with and without the Gaussianization approach in LDPC-coded PCM/FM with limiter/discriminator

6. Conclusion

In this work a novel Gaussianization approach is proposed to improve the BER performance of the SISO decoder in the digital PCM/FM telemetry system with L/D. Simulation results show a coding gain of about 0.8 dB at the 10⁻⁴ BER level when the FEC code is a TPC. The approach extends readily to digital PCM/FM telemetry systems using other FEC codes with SISO decoding, such as convolutional and turbo codes; this is left for future work.

7. References

[1] R. F. Pawula: "Improved Performance of Coded Digital FM", IEEE Transactions on Communications, Vol. 47, No. 11, pp. 1701-1708, 1999
[2] D. Taggart, R. Kumar, N. Wagner, Y. Krikorian, C. Wang, N. Elyashar, M. Cutler, C. Stevens: "PCM/FM performance enhancement using Reed Solomon channel coding", IEEE Aerospace Conference Proceedings, pp. 1337-1346, 2003
[3] M. Geoghegan: "Experimental results for PCM/FM, Tier 1 SOQPSK and Tier 2 multi-h CPM with turbo-product codes", Proc. Int. Telemetry Conf., Las Vegas, NV, 2003
[4] L. Wang, G. Chen: "Using LDPC Codes to Enhance the Performance of FM-DCSK", 47th IEEE International Midwest Symposium on Circuits and Systems, pp. I-401-I-404, 2004
[5] S. O. Rice: "Noise in FM receivers", in Time Series Analysis, M. Rosenblatt, Ed., New York: Wiley, pp. 395-422, 1963
[6] L. Kouwenhoven, M. Verhoeven and A. van Roermund: "A new simple design model for FM demodulators using soft-limiters for click noise suppression", IEEE International Symposium on Circuits and Systems, pp. 265-268, 1997
[7] Y. Zhao, X. Zhuang and S.-J. Ting: "Gaussian mixture density modelling of non-Gaussian source for autoregressive process", IEEE Transactions on Signal Processing, Vol. 43, No. 4, pp. 894-903, 1995
[8] S. M. Verbout, J. M. Ooi, J. T. Ludwig and A. V. Oppenheim: "Parameter estimation for autoregressive Gaussian-mixture processes: the EMAX algorithm", IEEE Transactions on Signal Processing, Vol. 46, No. 10, pp. 2744-2756, 1998
[9] Wang Pingbo, Cai Zhiming, Liu Feng and Tang Suofu: "G-Filter's Gaussianization function for interference background", 2010 International Conference on Signal Acquisition and Processing, pp. 76-79, 2010
[10] R. M. Pyndiah: "Near-optimum decoding of product codes: block turbo codes", IEEE Transactions on Communications, Vol. 46, No. 8, pp. 1003-1010, 1998
[11] Low Density Parity Check Codes for Use in Near-Earth and Deep Space Applications, CCSDS 131.1-O-2, Orange Book, September 2007

Situation / Location

Centre des Congrès Pierre Baudis, Toulouse (France)

11, esplanade Compans Caffarelli
31000 Toulouse - France