# A Quantitative Model To Evaluate Wrist-Rotation In Golf (P4)

**Quantitative Model**

Our technique for quality analysis takes advantage of linear projection methods traditionally used in the fields of signal processing and pattern recognition. The intuition behind this model is that the quality of the golf swing with respect to a specific criterion is proportional to the quality of the physical movement. That is, the deviation of the swing should be linearly related to our quality metric. When assessing the swing with respect to a specific criterion, e.g. wrist rotation, the degree of improperness can be quantified exclusively by a subset of features. Our technique aims to find features that are unique to each particular target quality metric and that preserve the linearity of that metric. Inspired by the linear methods of LDA and PCA, we build a quality measure for every given criterion by further processing of the features, as illustrated in Fig. 6.

The sets of features extracted from observations across the network are fed to the data fusion block to form a higher-dimensional feature space. Let F1, F2, …, Fn be feature vectors of size N×m obtained from sensor nodes {1, 2, …, n}, where N denotes the number of observations and m represents the number of features per node. The fused feature vector F has a size of N×M, where M = n×m.
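The fusion step above amounts to column-wise concatenation of the per-node feature matrices. A minimal sketch in NumPy, with hypothetical sizes (n = 3 nodes, N = 100 observations, m = 4 features per node) chosen purely for illustration:

```python
import numpy as np

# Hypothetical sizes for illustration: n = 3 sensor nodes, N = 100
# observations, m = 4 features per node (stand-in random data).
rng = np.random.default_rng(0)
N, n, m = 100, 3, 4
node_features = [rng.standard_normal((N, m)) for _ in range(n)]  # F1, ..., Fn

# Data fusion: concatenate the per-node feature matrices column-wise
# to form the fused feature matrix F of size N x M, with M = n * m.
F = np.hstack(node_features)
print(F.shape)  # (100, 12)
```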

PCA, known as an effective dimension reduction technique, aims to replace the original features with a new set of variables that can be ranked in order of their importance. The first few principal components account for those projections of the feature space that provide most of the information in the data. This technique is widely used for dimension reduction, where a high-dimensional dataset is replaced with a new dataset with fewer features. The resulting projections are given by C = [C1, …, CL], where each new feature Ci, called a principal component, can be expressed as a linear combination of the original features [f1, …, fM]:

$$C_i = \sum_{j=1}^{M} a_{ij} f_j, \qquad i = 1, \dots, L$$

where the coefficients a_{ij} are determined by eigenvalue decomposition on the original feature space.
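The projection described above can be sketched directly with an eigendecomposition of the covariance matrix. This is a minimal illustration of the standard PCA recipe, not the paper's exact implementation; the function name and data are placeholders:

```python
import numpy as np

def pca_project(X, L):
    """Project X (N x M) onto its first L principal components.

    Each output column C_i is a linear combination of the centered
    original features, with coefficients a_ij taken from the top
    eigenvectors of the covariance matrix.
    """
    Xc = X - X.mean(axis=0)                 # center the features
    cov = np.cov(Xc, rowvar=False)          # M x M covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalue decomposition
    order = np.argsort(eigvals)[::-1]       # rank by decreasing variance
    A = eigvecs[:, order[:L]]               # M x L coefficient matrix [a_ij]
    return Xc @ A                           # C = [C1, ..., CL]

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 12))
C = pca_project(X, L=5)
print(C.shape)  # (100, 5)
```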

LDA, used for both classification and dimension reduction, is characterized as trace optimization on scatter matrices. The technique aims to maximize the between-class scatter while minimizing the within-class scatter. It selects the projection given by

$$W = \arg\max_{W} \operatorname{trace}\!\left( (W^{\top} S_w W)^{-1} (W^{\top} S_b W) \right)$$

where Sb denotes the between-class scatter matrix and Sw represents the within-class scatter matrix. Classical LDA suffers from the Small Sample Size (SSS) problem, which results in singularity of the within-class scatter matrix. One way to overcome the singularity of Sw is to use PCA to reduce the dimension of the original dataset before applying LDA; this technique is known as subspace LDA. We use this method to refine the feature space prior to using LDA. We set the number of principal components, L, fed to the LDA block equal to the rank of the within-class scatter matrix Sw.
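A sketch of the subspace-LDA pipeline, assuming scikit-learn is available; the synthetic data, the class shift, and the seed are placeholders, and the rank-based choice of L mirrors the rule stated above:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Synthetic stand-in data: N observations in k = 4 quality groups.
rng = np.random.default_rng(2)
N, M, k = 120, 12, 4
y = rng.integers(0, k, size=N)
X = rng.standard_normal((N, M)) + y[:, None] * 0.5  # shift each group's mean

# Within-class scatter Sw, used only to pick L = rank(Sw).
mu = np.array([X[y == c].mean(axis=0) for c in range(k)])
Sw = sum((X[y == c] - mu[c]).T @ (X[y == c] - mu[c]) for c in range(k))
L = int(np.linalg.matrix_rank(Sw))

# Subspace LDA: PCA first (avoids a singular Sw in the reduced space),
# then LDA to obtain the k - 1 discriminant projections D.
C = PCA(n_components=min(L, M)).fit_transform(X)
D = LinearDiscriminantAnalysis(n_components=k - 1).fit_transform(C, y)
print(D.shape)  # (120, 3)
```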

Let X be a given dataset of size N×M, where N is the number of observations and M is the number of features. For every criterion for which our system builds a qualitative model, we assume that the dataset is divided into k groups g1, g2, …, gk, each accounting for a particular degree of quality with respect to the given type of bad swing. The reduced feature space C, which has a size of N×L, is applied to the LDA module to obtain k−1 projections. The projections D = [D1, …, Dk−1] from LDA, also called discriminant functions, give directions that maximize the distance between different groups and minimize the distances between trials within each group.

Although the first projection obtained from LDA provides maximum discrimination, the groups may partially overlap if only this projection is considered as our evaluation metric. To take maximum discrimination into consideration, we build a regression model on the LDA projections. This model is given by (4):

$$y_i = \alpha + \sum_{j=1}^{k-1} \beta_j D_{ij} \qquad (4)$$

where the dependent variable yi is a linear combination, with intercept α and regression coefficients βj, of the independent variables Dij, and Dij refers to the discriminant function Dj evaluated at the i-th observation.
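The model in (4) can be fitted by ordinary least squares on the discriminant scores. A minimal sketch with synthetic placeholder data (the true coefficients and noise level are invented for the demonstration):

```python
import numpy as np

# Synthetic discriminant scores D (N x (k-1)) and quality scores y built
# from invented coefficients, so the fit can be checked against them.
rng = np.random.default_rng(3)
N, p = 100, 3
D = rng.standard_normal((N, p))
beta_true = np.array([2.0, -1.0, 0.5])
y = 1.0 + D @ beta_true + 0.05 * rng.standard_normal(N)

# Least-squares fit of y_i = alpha + sum_j beta_j * D_ij.
A = np.hstack([np.ones((N, 1)), D])            # prepend intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # [alpha, beta_1, ..., beta_p]
y_hat = A @ coef                               # predicted quality scores
```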

The qualitative model can be tested by computing various statistics that measure the difference between the predicted values, ŷi, and the observed values, yi. The Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE) are among the most common statistics used to evaluate the overall quality of a regression model. RMSE is the square root of the average squared distance of the data points from the fitted line and is given by

$$\mathrm{RMSE} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( \hat{y}_i - y_i \right)^2}$$

where N denotes the cardinality of the validation set. MAE is the average of the absolute values of the residuals and is given by

$$\mathrm{MAE} = \frac{1}{N} \sum_{i=1}^{N} \left| \hat{y}_i - y_i \right|$$
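Both statistics are one-liners over the validation residuals. A small sketch with made-up values to show the arithmetic:

```python
import numpy as np

def rmse(y, y_hat):
    """Root Mean Squared Error over the validation set."""
    return float(np.sqrt(np.mean((y_hat - y) ** 2)))

def mae(y, y_hat):
    """Mean Absolute Error: average absolute residual."""
    return float(np.mean(np.abs(y_hat - y)))

# Toy validation set: residuals are 0.5, 0, 0.5, 0.
y = np.array([1.0, 2.0, 3.0, 4.0])
y_hat = np.array([1.5, 2.0, 2.5, 4.0])
print(rmse(y, y_hat))  # sqrt(0.125) ≈ 0.3536
print(mae(y, y_hat))   # 0.25
```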