## Abstract

Two extrapolation techniques for recursive digital filtering are
presented and compared with common padding methods such as linear and
reflection (reverse mirror) extrapolation. The case in which the
endpoints of position data lead to peak accelerations after filtering
and differentiation is examined. The first technique, "least
squares", is based on fitting a third-degree polynomial to the final
ten data points in both the forward and backward directions and
extending the signal by 20 data points using the polynomial
coefficients. The second technique, "prediction", is based on a
linear autoregressive model with 20 coefficients, which is applied in
both directions to extrapolate the signal by 20 points. The
lowest cumulative error of the endpoint accelerations (22.8 rad s⁻²)
represented just one third of the error obtained when the common
padding methods were used in optimal digital filtering (69.7 rad s⁻²).
It also represented approximately half the lowest cumulative error in
optimal smoothing with quintic splines (48.0 rad s⁻²).