
Tuesday, December 18, 2018

Human Factors in Aviation Essay

A large number of aviation accidents occur mostly due to lack of a good view of the surrounding environment. Traditional vision systems rely on synthetic vision or a partial view of the existing environment devoid of mist, fog and other abnormalities. Real scenarios require the ability to provide reliable vision overcoming natural hindrances. Humans learnt the art of flying when they abandoned the idea of flapping wings. Similarly, the latest developments in enhanced vision systems have sidestepped traditional vision systems to ensure flight safety.

In recent years, Controlled Flight Into Terrain (CFIT) has posed a significant peril in both civilian and military aviation. One of aviation's worst accidents occurred in Tenerife, when two Boeing 747s collided as one aircraft was attempting to take off while the other was about to land. The risk of CFIT can be greatly reduced with the help of a suite of radar and collision avoidance equipment commonly termed Enhanced Vision Systems (EVS). One of the primary causes of many runway accidents is reduced visibility.

One solution to this limitation lies in the use of infrared radiation sensing in aviation operations. All objects on earth emit infrared radiation, and their emissions and features can be detected through total darkness as well as through intervening mist, rain, haze, smoke and other conditions in which the objects are invisible to the human eye (Kerr, 2004). The first EVS was targeted for production in 2001 as standard equipment on the Gulfstream GV-SP aircraft.
The system was developed in part by Kollsman Inc. under a technology license from Advanced Technologies, Inc.

Utilization of EVS addresses critical areas such as CFIT avoidance, general safety enhancements during approach, landing and take-off, improved detection of trees, power lines and other obstacles, improved visibility in brown-out conditions, improved visibility in haze and rain, locating rugged and sloping terrain, and detecting runway incursions.

Enhanced Vision Systems

An enhanced vision system is an electronic means to provide a display of the forward external scene topography through the use of infrared imaging detectors. EVS designs are a combination of near-term and long-term designs. Near-term designs present sensor imagery with superimposed flight symbology on a Head-Up Display (HUD) and may include such enhancements as runway outlines and other key features such as obstacles, taxiways and flight corridors. Long-term designs include complete replacement of the out-the-window scene with a combination of electro-optical and sensor information.

Infrared Sensors

EVS uses infrared (IR) sensors that detect and measure the levels of infrared radiation emitted continuously by all objects. An object's radiation level is a function of its temperature, with warmer objects emitting more radiation. The infrared sensor measures these emission levels, which are then processed to produce a thermal video of the sensor's forward field of view. EVS IR sensors operate in the infrared spectrum (Kerr, 2004). The different bands are long wave IR, medium wave IR and short wave IR. Two variants of this technology are currently in aircraft use. A single sensor unit operating in the long wave, maximum weather penetration band has strong fog penetrating capability. Short wave sensors have the ability to enhance the acquisition of runway lighting.
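The dependence of emitted radiation on temperature can be illustrated with the Stefan-Boltzmann law. This is a simplified blackbody/grey-body model chosen for illustration (the temperatures and emissivity values are hypothetical), not a model of any particular EVS sensor:

```python
# Stefan-Boltzmann law: radiant exitance M = emissivity * sigma * T^4.
# Illustrates why warmer objects stand out in a thermal image.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiant_exitance(temp_kelvin, emissivity=1.0):
    """Total power radiated per unit area (W/m^2) for a grey body."""
    return emissivity * SIGMA * temp_kelvin ** 4

runway = radiant_exitance(283.0)   # cool tarmac, ~10 C (illustrative)
engine = radiant_exitance(373.0)   # hot engine casing, ~100 C (illustrative)

print(f"runway: {runway:.0f} W/m^2, engine: {engine:.0f} W/m^2")
print(f"contrast ratio: {engine / runway:.1f}x")
```

Because exitance grows as the fourth power of temperature, even a modest temperature difference gives an IR sensor a usable contrast that survives darkness and haze.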
A dual sensor variant consisting of short and long wave bands, employed for both light and weather penetration, fuses both sensor images for a full spectrum view. Typical sensors operating in the long wave infrared spectrum are cryo-cooled.

Models of EVS

One of the commonly used EVS systems is the EVS 2000. The operation of the EVS 2000 dual image sensor is given in Figure 1. The long wave infrared sensor provides best weather penetration, ambient background and terrain features. Similarly, the short wave sensor provides best detection of lighting, runway outlines and obstacle lights. The signal processor combines the images of both sensors to display a fused image depicting the current environment (Kerr, Luk, Hammerstrom, and Misha, 2003). (Source: Kerr et al., 2003)

Boeing Enhanced Vision System

Boeing's EVS enhances situational awareness by providing electronic and real-time vision to the pilots. It provides information in low level, night time and moderate to heavy weather operations during all phases of flight. It has a series of imaging sensors, a navigational terrain database with a virtual pathway for approach during landings, an EVS image processor and a wide field-of-view, see-through helmet mounted display integrated with a head tracker. It also includes a synthetic vision (SV) system accompanying the EVS to present a computer generated image of the out-the-window view in areas that are not covered by the imaging sensors of the EVS. The EVS image processor performs the following three functions. It compares the image scanned by the ground mapping radar and the MMW sensor with a database to present a computer generated image of the ground terrain conditions. It is accompanied by a Global Positioning System (GPS) to provide a location map during all phases of flight. The IR imaging sensors provide a thermal image of the forward line of view of the aircraft.
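The dual-band fusion step performed by a signal processor such as the EVS 2000's can be sketched as per-pixel weighted averaging of co-registered frames. This is a minimal illustrative sketch, not the proprietary fusion algorithm; the frame contents and weight are invented for the example:

```python
import numpy as np

def fuse_dual_band(lwir, swir, w_lwir=0.5):
    """Fuse co-registered LWIR and SWIR frames (floats in [0, 1]) by
    per-pixel weighted averaging, then stretch to [0, 1] for display.
    A real EVS processor would use a more sophisticated fusion rule."""
    fused = w_lwir * lwir + (1.0 - w_lwir) * swir
    lo, hi = fused.min(), fused.max()
    return (fused - lo) / (hi - lo) if hi > lo else fused

# Toy frames: LWIR sees warm terrain, SWIR sees a runway light.
lwir = np.zeros((4, 4)); lwir[2:, :] = 0.8   # warm ground in lower half
swir = np.zeros((4, 4)); swir[3, 1] = 1.0    # a single approach light
frame = fuse_dual_band(lwir, swir)
print(frame.round(2))
```

The fused frame keeps both the terrain contrast from the long wave band and the point light from the short wave band, which is exactly the complementarity the text describes.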
Typical HUD symbology including altitude, airspeed, pressure, etc. is added without any obscuration of the underlying scene. The SV imagery provides a three dimensional view of a clear-window scene with reference to the stored on-board database. Figure 2 gives Boeing's EVS/SV integrated system. The projection of SV data should be supported by the EVS data so that the images register accurately. The system provides for three basic views, i.e., the flight view or normal view, the map views at different altitudes or locations, and the orbiting view, an exocentric/ownship view from any orbiting location around the vehicle (Jennings, Alter, Barrow, Bernier and Guell, 2003). (Source: Jennings et al., 2003)

EVS Image Processing and Integration

Association Engine Approach

This is a neural-net inspired, self-organizing associative memory approach that can be implemented on FPGA based boards of moderate cost. It constitutes a very efficient implementation of best-match association at high real-time video rates. It is highly robust in the face of noisy and obscured image inputs. This means of image representation emulates the human visual pathway. A preprocessor performs feature extraction of edges, as well as potentially higher levels of abstraction, in order to generate a large, sparse and random binary vector for each image frame. The features are created by looking for zero crossings after convolving with a Laplacian of Gaussian filter, thereby finding edges. Each edge image is then thresholded by taking the K strongest features, setting those to 1 and all others to 0. For multiple images, the feature vectors are strung together to create a composite vector. The operations are performed over a range of multi-resolution hyper-pixels, including those for 3-D images.
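The preprocessor described above can be sketched as follows: Laplacian-of-Gaussian filtering, zero-crossing detection, then keeping the K strongest responses as a sparse binary vector. This is an approximation of the described pipeline under assumed parameters (the sigma, K and image contents are invented for the example), not the Kerr et al. implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def sparse_edge_vector(image, k=64, sigma=1.5):
    """Sketch of the described preprocessor: LoG filtering,
    zero-crossing edge detection, then a sparse binary vector with
    exactly the K strongest edge responses set to 1."""
    log = gaussian_laplace(image.astype(float), sigma=sigma)
    # Zero crossings: sign changes between horizontal/vertical neighbours.
    zc = np.zeros_like(log, dtype=bool)
    zc[:, :-1] |= np.sign(log[:, :-1]) != np.sign(log[:, 1:])
    zc[:-1, :] |= np.sign(log[:-1, :]) != np.sign(log[1:, :])
    # Edge strength: |LoG| response at zero-crossing locations only.
    strength = np.where(zc, np.abs(log), 0.0).ravel()
    vec = np.zeros(strength.size, dtype=np.uint8)
    if k < strength.size:
        vec[np.argpartition(strength, -k)[-k:]] = 1  # K strongest -> 1
    else:
        vec[strength > 0] = 1
    return vec

# Toy frame: a bright square on a dark background.
img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0
v = sparse_edge_vector(img, k=64)
print(v.sum(), "active features out of", v.size)
```

The result is the large, sparse, binary representation the association engine consumes; stacking such vectors from multiple frames yields the composite vector mentioned in the text.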
FPGA provides a complete solution by offering the necessary memory bandwidth, significant parallelism and low precision tolerance. Figure 3 provides an illustration of an association engine operation (Kerr et al., 2003). Fig 3: Association Engine Operation (Source: Kerr et al., 2003)

DSP Approach

One approach to performing multi-sensor image enhancement and fusion is the Retinex algorithm developed at the NASA Langley Research Center. Digital signal processors from Texas Instruments have been used to successfully implement a real-time version of Retinex. The C6711, C6713 and DM642 are some of the commercial digital signal processors (DSPs) used for image processing. Image processing, a subset of digital signal processing, enables fusion of images from heterogeneous sensors to aid in efficient navigation. Figure 4: EVS Image Processing (Source: Hines et al., 2005)

In the image processing architecture of EVS, Long Wave Infrared (LWIR) and Short Wave Infrared (SWIR) processing can be done simultaneously. The multi-spectral data streams are registered to remove field of view and spatial resolution differences between the cameras and to correct inaccuracies. Registration of the LWIR data to the SWIR is performed by selecting SWIR as the baseline and applying an affine transform to the LWIR imagery. The LaRC patented Retinex algorithm is used to enhance the information content of the captured imagery, particularly during poor visibility conditions. The Retinex can also be used as a fusion engine, since the algorithm performs nearly symmetric processing on multi-spectral data and applies multiple scaling operations on each spectral band. The fused video stream contains more information than the individual spectral bands and provides the pilot a single output which can be interpreted easily. Figure 4 illustrates the various processing stages in fusing a multi-spectral image (Hines et al., 2005).
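The multi-scale surround processing at the heart of Retinex can be sketched as follows. This is a generic multi-scale Retinex sketch with invented scales and test data; the patented LaRC version adds further restoration and auto-gain stages not reproduced here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multi_scale_retinex(image, sigmas=(15, 80, 250)):
    """Sketch of multi-scale Retinex: at each scale, subtract the log
    of a Gaussian-blurred surround from the log of the image, average
    the scales, then stretch to [0, 1] for display."""
    img = image.astype(float) + 1.0          # offset avoids log(0)
    out = np.zeros_like(img)
    for sigma in sigmas:
        surround = gaussian_filter(img, sigma=sigma)
        out += np.log(img) - np.log(surround)
    out /= len(sigmas)
    lo, hi = out.min(), out.max()
    return (out - lo) / (hi - lo) if hi > lo else out

# Toy low-contrast "foggy" frame: weak haze gradient plus a dim obstacle.
frame = np.full((64, 64), 0.5)
frame += np.linspace(0.0, 0.05, 64)[None, :]  # haze gradient
frame[30:34, 30:34] = 0.58                    # dim obstacle
enhanced = multi_scale_retinex(frame, sigmas=(5, 15, 30))
print(enhanced.min(), enhanced.max())
```

Because each band is processed with the same surround operation, the same routine can be applied per spectral band before fusion, which is why the text describes Retinex doubling as a fusion engine.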
Design Tradeoffs

An LWIR based single image system is no panacea for fog, but it reduces hardware requirements. It is also a low cost solution, with lower resolution. An image fusion system provides effective penetration of fog and better resolution, but comes at a higher cost. Increasing the bandwidth provides better spatial and angular resolution and satisfactory atmospheric transmission, but costs more. Basic diffraction physics limits the true angular resolution, but this can be overcome by providing sufficient oversampling. Sensitivity vs. update rate and physical size vs. resolution have traditionally been issues with passive cameras. Fortunately, dual mode sensors overcome these trade-offs (Kerr et al., 2003). A successful image capture of a landing scenario is given in Figure 5. Figure 5: EVS view vs. pilot's view (Source: Yerex, 2006)

Human Factors

Controlling the aircraft during the entire period of flight is the sole responsibility of the pilot. The pilot seeks guidance from the co-pilot, the control tower and the inbuilt EVS to successfully steer the aircraft. The pilot controls the aircraft based on a representation of the world displayed in the cockpit by the inbuilt systems, and may not see the actual out-the-window visual scene. Visual information is presented that may not otherwise be visible. Some of the information may be lost due to limitations of resolution, field of view or spectral sensitivities. Therefore, with EVS, the world is not viewed directly but as a representation through sensors and computerized databases. More importantly, the essential data for pilotage should be available on the display. Though an EVS gives a representation of the exact view of the flight environment, its accuracy plays a significant role in flight safety. Thus human factors are vital for flight control.
