Answering Questions about Imagery and MH370

So first a bit of background on satellite orbits.  If you throw an object, its path follows a curve, landing under the influence of gravity some distance away.  But if an object is thrown hard enough, its curve matches the curve of the earth and so it ‘misses’ the earth; it’s in orbit.  The characteristics of that orbit are governed by Newtonian physics, with gravity preventing the object from flying away from the earth and its speed preventing it from falling back to earth.
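
As a rough illustration, a couple of lines of Python (assuming a simple circular orbit and standard values for the earth’s size and gravity) show the speed an object needs to stay in orbit at a given height:

```python
import math

MU_EARTH = 3.986e14      # earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000      # mean earth radius, m

def circular_orbit_speed(altitude_m):
    """Speed (m/s) needed to hold a circular orbit at the given altitude."""
    return math.sqrt(MU_EARTH / (R_EARTH + altitude_m))

# A low imagery orbit versus geostationary altitude
print(f"700 km:    {circular_orbit_speed(700e3) / 1000:.1f} km/s")     # ~7.5 km/s
print(f"35,786 km: {circular_orbit_speed(35_786e3) / 1000:.1f} km/s")  # ~3.1 km/s
```

Close to the earth an object needs about 7.5 kilometres per second to stay up; much further out, far less.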

The orbit of that object (now called a satellite) is entirely independent of the rotation of the earth – the satellite orbits and the earth rotates beneath it.  A satellite orbiting close to the earth moves very rapidly over the ground; a satellite further from the earth takes a more leisurely pace.  The moon is just another, much more distant, satellite – it takes a month to orbit the earth.
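
The same Newtonian relationship fixes the orbital period, which lengthens with distance.  A minimal sketch, again assuming circular orbits (and ignoring the moon’s own mass), compares a low imagery orbit, a geostationary orbit and the moon:

```python
import math

MU_EARTH = 3.986e14      # earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000      # mean earth radius, m

def orbital_period_hours(altitude_m):
    """Period in hours of a circular orbit: T = 2*pi*sqrt(r^3 / mu)."""
    r = R_EARTH + altitude_m
    return 2 * math.pi * math.sqrt(r**3 / MU_EARTH) / 3600

print(f"800 km (imagery orbit):    {orbital_period_hours(800e3):.1f} hours")         # ~1.7 hours
print(f"35,786 km (geostationary): {orbital_period_hours(35_786e3):.1f} hours")      # ~24 hours
print(f"384,400 km (the moon):     {orbital_period_hours(384_400e3) / 24:.0f} days") # ~28 days
```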

There’s one particular orbit where a satellite takes exactly one day to circle the earth – and so the satellite effectively hovers over one place on the earth’s surface.  This is known as a geostationary orbit and is incredibly useful for communications satellites such as Inmarsat.  Unfortunately, the key characteristic of a geostationary orbit is that the satellite needs to be in an orbit almost 36,000 kilometres above the surface of the earth.  It’s imprisoned at that altitude by the laws of physics.
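
That altitude falls straight out of the physics: set the orbital period equal to one sidereal day (one rotation of the earth) and solve for the orbital radius.  A short sketch with standard constants recovers the familiar figure:

```python
import math

MU_EARTH = 3.986e14        # earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000        # mean earth radius, m
SIDEREAL_DAY = 86_164      # one rotation of the earth, in seconds

# From T = 2*pi*sqrt(r^3 / mu), solve for r:  r = (mu * (T / (2*pi))**2) ** (1/3)
r = (MU_EARTH * (SIDEREAL_DAY / (2 * math.pi)) ** 2) ** (1 / 3)
print(f"Geostationary altitude: ~{(r - R_EARTH) / 1000:,.0f} km")   # ~35,800 km
```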

This doesn’t matter for communications satellites – you just put a more sensitive antenna on board.  It really matters for imagery though.  Imagery satellites are typically trying to achieve the best possible resolution – and 36,000 kilometres is simply too high to be useful (except for weather monitoring).  Imagery satellites need to strike a balance between being as low as possible whilst maintaining a long life in orbit (which requires staying high enough to avoid atmospheric drag).  So most imagery satellites are in much lower orbits – typically in the range 600-1,000 kilometres.
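
To see why altitude matters so much for resolution, consider the simple optical relationship: for a given telescope, the ground footprint of each pixel scales linearly with altitude.  The sketch below uses made-up but plausible sensor numbers purely to illustrate the scaling:

```python
def nadir_gsd(altitude_m, focal_length_m, pixel_pitch_m):
    """Ground sample distance (m per pixel) looking straight down, for a simple
    pinhole-camera model: GSD = altitude * pixel_pitch / focal_length."""
    return altitude_m * pixel_pitch_m / focal_length_m

# Illustrative (made-up) sensor: 10 m focal length, 8 micron detector pixels
for alt_km in (700, 36_000):
    print(f"{alt_km:>6} km altitude -> {nadir_gsd(alt_km * 1000, 10.0, 8e-6):.2f} m per pixel")
```

The same telescope that resolves better than a metre from a low orbit would only manage tens of metres from geostationary altitude.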

So there is simply no way for a high resolution imagery satellite to ‘hover’ over a point on the earth’s surface.  They whizz around the earth roughly every 100 minutes but pass directly over any particular point on the earth’s surface only every ten days or so.  The only way of increasing the revisit frequency (looking straight down) is to add more satellites to create a constellation.
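
To put numbers on the revisit problem: while the satellite completes one orbit, the earth rotates underneath it, so successive ground tracks at the equator are separated by well over 2,000 kilometres.  A sketch, assuming a 700 kilometre orbit, shows the geometry:

```python
import math

MU_EARTH = 3.986e14        # earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000        # mean earth radius, m
SIDEREAL_DAY = 86_164      # one rotation of the earth, in seconds

altitude_m = 700e3         # an assumed, typical imagery-satellite altitude
period_s = 2 * math.pi * math.sqrt((R_EARTH + altitude_m) ** 3 / MU_EARTH)

# While the satellite completes one orbit, the earth rotates beneath it, so each
# successive ground track crosses the equator further to the west.
shift_km = (period_s / SIDEREAL_DAY) * 2 * math.pi * R_EARTH / 1000

print(f"Orbital period:           {period_s / 60:.0f} minutes")    # ~99 minutes
print(f"Orbits per day:           {SIDEREAL_DAY / period_s:.1f}")  # ~14.6
print(f"Track spacing at equator: ~{shift_km:,.0f} km")            # ~2,750 km
# With a nadir swath of only a few tens of kilometres, it takes many days of
# slightly offset tracks before the same spot is directly underneath again.
```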

That segues into another question: why can’t imagery satellites be moved to take pictures ‘on demand’?  We used the term ‘imprisoned’ above… and it’s those pesky laws of physics again.  You can think of a satellite rather like a train on tracks: taking the train to a new place involves the tremendous cost of moving the tracks.  A satellite’s orbit can only be changed by applying energy, in the form of small rockets that adjust the orbit.  An imagery satellite only carries enough manoeuvring fuel to make small adjustments to its orbit – primarily to maintain altitude.  There’s nothing spare to steer the satellite to a new tasking.
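
To put a number on ‘imprisoned’: even a small change to the plane of an orbit is staggeringly expensive in fuel.  The sketch below uses the standard plane-change relation (delta-v = 2·v·sin(Δi/2)) for an assumed 700 kilometre orbit; the station-keeping budget mentioned in the comment is a rough, typical figure rather than any specific satellite’s:

```python
import math

MU_EARTH = 3.986e14        # earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000        # mean earth radius, m

v = math.sqrt(MU_EARTH / (R_EARTH + 700e3))   # orbital speed at 700 km, ~7.5 km/s

for plane_change_deg in (1, 5, 30):
    dv = 2 * v * math.sin(math.radians(plane_change_deg) / 2)
    print(f"Rotate the orbital plane by {plane_change_deg:>2} degrees: delta-v = {dv:,.0f} m/s")
# Even a 1-degree plane change costs ~130 m/s; a typical imagery satellite budgets
# only a few tens of m/s per year for station-keeping (a rough, illustrative figure).
```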

The only movement that’s possible is to rotate the satellite to obtain images looking forward, back and side-to-side.  Modern imagery satellites are very agile and can greatly increase their revisit frequency… but at a cost.  The best resolution is obtained looking straight down…  ‘on nadir’ is the technical term.  As soon as the satellite is rotated ‘off nadir’, the resolution degrades.  Basic geometry tells you that a 1m pixel looking straight down becomes roughly a 1.6m pixel at a look angle of 45°.
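
The geometry behind that figure can be checked with a couple of lines of trigonometry.  In the simplest flat-earth approximation the slant range grows as 1/cos of the look angle, and the oblique incidence stretches the pixel further across track; the exact number depends on the sensor model and on the curvature of the earth, but the sketch lands in the same ballpark:

```python
import math

def off_nadir_stretch(look_angle_deg):
    """Flat-earth approximation of how a nadir pixel stretches off-nadir.

    Along-track the slant range grows as 1/cos(theta); across track the
    oblique incidence adds another factor of 1/cos(theta).
    """
    c = math.cos(math.radians(look_angle_deg))
    return 1 / c, 1 / c ** 2

along, cross = off_nadir_stretch(45)
print(f"Along-track stretch: x{along:.2f}")    # ~1.41
print(f"Cross-track stretch: x{cross:.2f}")    # ~2.00
print(f"Mean size of a 1 m nadir pixel: ~{math.sqrt(along * cross):.1f} m")
```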

The notion of rotating the satellite also presupposes that it knows where to look!  Which brings us back to the core challenge in the hunt for MH370…  the search area remains huge.  The focus at present is maximum coverage at the best resolution, which suggests that on-nadir swaths of imagery are the optimum solution.

There’s also been a lot of coverage in the media of searching the sea bed by sonar.   Sonar is used to produce bathymetric data (i.e. detailed depth information) by bouncing sound waves off the sea bed.  In that respect, sonar is just another form of imagery (it’s actually directly analogous to radar-based imagery) and carries with it all the limiting characteristics that have been discussed in these blog posts except that of weather.

In particular, the size of the sonar swath is tiny compared to the search area, and identifying the specific characteristics of aircraft debris demands human interpretation.  At the depths in the search area, the ability to distinguish different sea bed characteristics is very limited, as is the resolution of the bathymetry.
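
Some back-of-an-envelope arithmetic shows why.  The numbers below are assumed, purely illustrative figures for the swath width, tow speed and search box, not the actual equipment or search parameters:

```python
# Assumed, illustrative figures only -- not the actual equipment or search plan.
search_area_km2 = 60_000      # an assumed search box
swath_width_km = 2.0          # assumed deep-tow sidescan sonar swath
tow_speed_knots = 4           # assumed survey speed

area_per_day_km2 = swath_width_km * (tow_speed_knots * 1.852) * 24
print(f"Coverage: ~{area_per_day_km2:,.0f} km2 per day, so roughly "
      f"{search_area_km2 / area_per_day_km2:,.0f} days to sweep {search_area_km2:,} km2")
```

With figures like these, sweeping the whole box takes months rather than days, which is why the search area has to be narrowed before sonar becomes practical.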

It’s worth emphasizing that bathymetric sonar systems cannot detect the ‘pings’ from the flight data recorder or the cockpit voice recorder;  very specialist kit is needed to listen out for those pings.  A dramatic narrowing of the search area is needed before this kit can be deployed.  Time is fast running out as the pinger batteries expire after 30 days.

Again, we hope that this helps alleviate some of the frustration by explaining the nature of the spatial technology challenges.