**1. Introduction**

Here, attitude refers to either the pitch or roll angle of a body with respect to an inertial coordinate system fixed on the Earth. We restrict our attention to using a single-axis accelerometer and gyroscope pair to estimate the pitch angle of a body. Such a system is used, for instance, to estimate the orientation of an aircraft, drone or self-balancing robot. The accelerometer and gyroscope are usually MEMS (Micro-Electro-Mechanical Systems) based chips, for instance the popular MPU6000 series from InvenSense (part of TDK Corp since May 2018), which combines a 3-axis accelerometer and a 3-axis gyroscope in a single chip, the ADXL335 accelerometer from Analog Devices, the LY3100A gyroscope from STMicroelectronics, etc. These sensors can be used to estimate the 3D orientation, i.e. the roll, pitch and yaw angles, of a rigid body. Since I am going to use the accelerometer and gyroscope on a two-wheeled self-balancing robot, only the pitch angle matters for keeping the robot balanced, and the outputs from one axis of the accelerometer and gyroscope will be used to estimate the pitch angle of the robot platform (see Fig 1). I prefer to call the pitch angle the "tilt" angle, and this is the term I am going to use from now on.
**2. Measurement of Tilt Angle with MEMS Accelerometer and Gyroscope**

In principle we can use either the accelerometer or the gyroscope to estimate the tilt angle of the robot. A gyroscope measures angular velocity, so we must integrate its output with respect to time to obtain the tilt angle. For a typical digital-output MEMS sensor the output data is in integer format, and for an analog-output MEMS sensor it is a voltage. Thus we must know how to interface with a digital MEMS sensor, or how to convert the voltage outputs of an analog MEMS accelerometer and gyroscope to angle and angular velocity. There are already many online resources describing the use of these MEMS sensors, so it is assumed that the reader is familiar with this. Consider the self-balancing robot in Fig 1.


**Figure 1.**

The real accelerometer and gyroscope outputs are shown in Fig 2. In reality the accelerometer and gyroscope outputs are corrupted by systematic and random noise, which we lump into the terms *e_A* and *e_G*.

**Figure 2.**

To obtain the tilt angle estimate from the gyroscope, the angular velocity needs to be integrated; this is shown in Fig 3.


**Figure 3.**
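The two raw estimates described above can be sketched in a few lines of Python. This is a sketch only: the axis convention (gravity along the sensor z axis when the robot is upright) and the function names are my assumptions, not something fixed by the figures.

```python
import math

def accel_tilt(ax, az):
    """Tilt angle in radians from the accelerometer outputs along the
    x and z axes (in units of g). Valid when the body is quasi-static,
    so that the only acceleration sensed is gravity."""
    return math.atan2(ax, az)

def gyro_tilt_step(theta_prev, omega, T):
    """One step of rectangular (Euler) integration of the gyroscope
    angular velocity omega (rad/s) over the sampling interval T (s)."""
    return theta_prev + omega * T
```

Integrating a constant angular velocity of 0.1 rad/s for one second recovers a tilt of 0.1 rad, but any constant bias in *omega* accumulates in exactly the same way, which is the drift visible in the gyroscope estimate of Fig 5.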

**Figure 4** - System setup to measure the actual tilt angle of a two-wheeled self-balancing robot.

A comparison of the actual tilt angle with the estimates from either the accelerometer or the gyroscope over a duration of 10 seconds is shown in Fig 5. Here I sample the outputs of the accelerometer, gyroscope and potentiometer at 1000 samples per second. Take note of the direction of the time axis! (This is due to a bug in the graph plotting software I wrote at the time, in 2013.) Notice the vulnerability of the accelerometer estimate to vibration when I rock the robot platform back and forth rapidly. Also note the drift in the gyroscope estimate after around 5 seconds.

**Figure 5** - Comparison of measured and estimated tilt angles.

**3. The Complementary Filter Approach to Sensor Fusion**

One way to improve the estimate of the tilt angle is to combine the outputs of both the accelerometer and gyroscope; this is called "sensor fusion". Another popular approach is based on the Kalman filter. That method treats the accelerometer and gyroscope outputs as measurements of the "state" of the system, corrupted by errors or noise. Based on the current and prior state estimates, together with some assumptions on the noise characteristics, the actual value of the current state can be estimated. The focus here is the complementary filter approach, as it is less computationally intensive and can easily be implemented on a low-cost micro-controller.

Referring to Fig 5 again, observe that the error from the gyroscope is mainly low frequency; we can imagine a slowly varying error superimposed on the gyroscope estimate in Fig 5. For the accelerometer estimate, on the other hand, the error contribution can be considered high frequency. Note that when I rock the robot platform slowly the accelerometer estimate matches the actual tilt angle, and that the estimate deviates from the actual tilt angle when the robot platform is rocked rapidly.

Thus we can use the accelerometer estimate when the body movement is slow (i.e. low frequency), and the gyroscope estimate when the body movement is fast (i.e. higher frequency). This needs to be done automatically. A straightforward way to achieve this is to apply a high-pass filter (HPF) to the gyroscope estimate and a low-pass filter (LPF) to the accelerometer estimate, then combine both estimates as depicted in Fig 6. This combination of estimates leads to the name sensor fusion. In order to match the actual tilt angle, the cut-off frequency of the LPF and HPF must be the same. Because the LPF and HPF are complementary to each other in the frequency domain, this is called complementary filter sensor fusion.

**Figure 6** - Implementation of the complementary filter using first-order low-pass and high-pass filters.

I am assuming that the reader is familiar with continuous-time linear signal analysis. In Fig 6 we use the Laplace transform to represent the operations on the signals (i.e. the angle and angular velocity) in the frequency domain, where *s* is the complex frequency variable. The signals are assumed to be continuous time. The parameter *a* in the transfer functions of the LPF and HPF determines the cut-off frequency of the filters; in fact the cut-off frequency is *1/a* radians/sec, which is easily converted to a cut-off frequency in Hz. Lastly, note that the LPF and HPF in Fig 6 are first-order filters, as the denominator of each filter transfer function is a first-order polynomial in *s*.

For typical MEMS sensors, the cut-off frequency is usually in the vicinity of 2 to 10 Hz. As an example, suppose we wish to implement a pair of complementary filters with a cut-off frequency of 4 Hz. The calculation and frequency-domain plots in Fig 7 illustrate the transfer functions of both filters with the 4 Hz cut-off.

**Figure 7** - LPF and HPF magnitude plots with 4 Hz cut-off.
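The 4 Hz example can be checked numerically. The sketch below (the variable and function names are mine) evaluates both first-order transfer functions from Fig 6 at s = jω, confirming the −3 dB point at the cut-off and the complementary property:

```python
import math

f_c = 4.0                        # desired cut-off frequency in Hz
a = 1.0 / (2.0 * math.pi * f_c)  # cut-off at 1/a rad/s, as in Fig 6

def H_lpf(w):
    """LPF transfer function 1/(a*s + 1) evaluated at s = j*w."""
    return 1.0 / (a * 1j * w + 1.0)

def H_hpf(w):
    """HPF transfer function a*s/(a*s + 1) evaluated at s = j*w."""
    return (a * 1j * w) / (a * 1j * w + 1.0)

w_c = 1.0 / a  # cut-off frequency in rad/s
```

At ω = 1/a the magnitude of each filter is 1/√2 (about −3 dB), and at every frequency H_lpf(jω) + H_hpf(jω) = 1 exactly, which is the complementary property described above.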

**4. Digital or Discrete Time Implementation**

Now we would like to convert the operation of estimating the tilt angle using complementary filter sensor fusion in Fig 6 to discrete time, as the operation is performed in a digital computer. The procedure is as follows:

1) Start from the complementary filter sensor fusion equation in Fig 6.

2) Since it is in the frequency domain, convert it into the time domain using the relationship between multiplication by *s* and differentiation in the time domain (see texts on the Laplace transform). This step yields a differential equation.

3) Approximate the time differentiation with a finite difference. Here we use the backward finite difference, which gives a stable difference equation. A sampling interval of *T* is assumed.

4) Finally, tidy up the finite difference equation to obtain a functional form that can easily be implemented in computer code.

*Update Equation*. Carrying out the steps above gives

θ[n] = b·(θ[n−1] + T·ω[n]) + (1 − b)·θ_A[n], where b = a/(a + T),

with θ[n] the tilt estimate, ω[n] the gyroscope angular velocity and θ_A[n] the accelerometer tilt estimate at sample n.

**Figure 8** - Derivation of the discrete-time update equation for digital computer implementation.
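Assuming the backward-difference discretization described in the steps above (which, by my own working, gives b = a/(a + T); Fig 8 may group the terms differently), the update equation reduces to a single line of code:

```python
def complementary_update(theta_prev, omega, theta_acc, b, T):
    """One discrete-time complementary filter step.

    theta_prev : previous tilt estimate (rad)
    omega      : gyroscope angular velocity at this sample (rad/s)
    theta_acc  : accelerometer tilt estimate at this sample (rad)
    b          : filter coefficient, b = a / (a + T)
    T          : sampling interval (s)
    """
    return b * (theta_prev + omega * T) + (1.0 - b) * theta_acc
```

With omega held at zero the estimate converges to theta_acc through the low-pass path, while any rotation shows up immediately through the gyroscope term.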

As a final example of using the above concepts, consider the situation shown in Fig 9, where we have a system with a sampling interval of 2 milliseconds, i.e. every 2 milliseconds the controller reads the values of the accelerometer and gyroscope and performs the tilt angle estimation.

**Figure 9** - Calculation example.
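For the Fig 9 setup with T = 2 ms, and assuming the same 4 Hz cut-off as in Fig 7, the coefficients can be worked out as follows (b = a/(a + T) comes from my reading of the backward-difference derivation):

```python
import math

T = 0.002                        # 2 ms sampling interval, as in Fig 9
f_c = 4.0                        # assumed cut-off frequency (Hz), as in Fig 7
a = 1.0 / (2.0 * math.pi * f_c)  # about 0.0398 s

b = a / (a + T)                  # gyroscope-path coefficient, about 0.952
# The accelerometer-path coefficient is 1 - b, about 0.048;
# by construction the two coefficients sum to exactly 1.
```

Note that this lands close to the 0.96 to 0.995 tuning range discussed in the next section.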

One important observation is that the coefficients of the update equation above must sum to 1.

**5. Practical Considerations**

As the example in Fig 9 shows, technically we start by determining the cut-off frequency for the complementary LPF and HPF based on physical considerations of the actual sensors (such as the 3 dB bandwidth of the accelerometer and gyroscope stated in the datasheet, any external RC filters we may add to the sensor outputs, etc.). From this we then calculate the coefficients of the update equation. More often than not, however, we simply vary the coefficients by trial and error, comparing the estimated tilt angle with the actual angle measured by some physical means. For example, we can try the following sequence for the coefficients *b* and 1 − *b*:

*b* = 0.96, 0.97, 0.98, 0.99, 0.995, etc.

1 − *b* = 0.04, 0.03, 0.02, 0.01, 0.005, etc.

Fig 10 shows an example of the estimated tilt angle for two coefficient values. The robot is pushed manually so that it tilts to the back, then to the front. The coefficients have to be tuned until the estimated tilt angle matches the measured tilt angle as closely as possible.
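Each trial value of *b* corresponds to a particular cut-off frequency. Inverting b = a/(a + T) (again from my reading of the derivation) gives a = bT/(1 − b), so the tuning sequence above can be interpreted in the frequency domain:

```python
import math

T = 0.002  # 2 ms sampling interval, as in the earlier example

def implied_cutoff_hz(b, T):
    """Cut-off frequency implied by coefficient b, inverting b = a/(a + T)."""
    a = b * T / (1.0 - b)
    return 1.0 / (2.0 * math.pi * a)

for b in (0.96, 0.97, 0.98, 0.99, 0.995):
    print(f"b = {b:.3f}  ->  cut-off = {implied_cutoff_hz(b, T):.2f} Hz")
```

Increasing *b* lowers the cut-off, i.e. the gyroscope is trusted over a wider band: with T = 2 ms, b = 0.96 corresponds to roughly 3.3 Hz, while b = 0.995 corresponds to roughly 0.4 Hz. So the trial-and-error tuning is really a search for the cut-off frequency that best matches the sensors.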


**Figure 10** - Estimated tilt angle from MEMS sensors with different sets of coefficient values.
