I will try to answer this in three steps: first, let's identify what we mean by a univariate smooth. Next, we will describe a multivariate smooth (specifically, a smooth of two variables). Finally, I will make my best attempt at describing a tensor product smooth.
1) Univariate smooth
Let's say we have some response data y that we hypothesize is an unknown function f of a predictor variable x plus some error ε. The model would be:
y=f(x)+ε
Now, to fit this model, we need to identify the functional form of f. The way we do this is by identifying basis functions, which are superposed in order to represent the function f in its entirety. A very simple example is a linear regression, in which the basis functions are just β1, the intercept, and β2x, the linear term. Applying the basis expansion, we have
y=β1+β2x+ε
In matrix form, we would have:
Y=Xβ+ε
Where Y is an n-by-1 column vector, X is an n-by-2 model matrix, β is a 2-by-1 column vector of model coefficients, and ε is an n-by-1 column vector of errors. X has two columns because there are two terms in our basis expansion: the linear term and the intercept.
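To make that concrete, here is a minimal numpy sketch (made-up data, nothing mgcv-specific): the basis expansion is just a two-column model matrix, and β is the ordinary least-squares solution.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = rng.uniform(0.0, 1.0, n)                    # predictor
y = 2.0 + 3.0 * x + rng.normal(0.0, 0.1, n)     # response = intercept + slope*x + noise

# Two basis functions (the intercept and the linear term) -> two columns in X
X = np.column_stack([np.ones(n), x])            # n-by-2 model matrix

# Least-squares estimate of beta; equivalent to (X'X)^-1 X'Y but more stable
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)                                     # roughly [2.0, 3.0]
```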
The same principle applies for basis expansion in MGCV, although the basis functions are much more sophisticated. Specifically, individual basis functions need not be defined over the full domain of the independent variable x. Such is often the case when using knot-based bases (see "knot based example"). The model is then represented as the sum of the basis functions, each of which is evaluated at every value of the independent variable. However, as I mentioned, some of these basis functions take on a value of zero outside of a given interval and thus do not contribute to the basis expansion outside of that interval. As an example, consider a cubic spline basis in which each basis function is symmetric about a different value (knot) of the independent variable -- in other words, every basis function looks the same but is just shifted along the axis of the independent variable (this is an oversimplification, as any practical basis will also include an intercept and a linear term, but hopefully you get the idea).
To be explicit, a basis expansion of dimension i−2 could look like:
y=β1+β2x+β3f1(x)+β4f2(x)+...+βifi−2(x)+ε
where each function f is, perhaps, a cubic function of the independent variable x.
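To illustrate what a knot-based model matrix might look like, here is a toy basis in numpy. It uses truncated cubic functions placed at a handful of knots, which is a simplification of the bases mgcv actually provides (and truncated rather than symmetric about each knot), but it shows how every basis function is evaluated at every value of x to fill a column of X.

```python
import numpy as np

def toy_cubic_basis(x, knots):
    """Toy knot-based cubic basis: an intercept, a linear term, and one
    truncated cubic per knot. Each truncated cubic is zero to the left of
    its knot, so it only contributes over part of the domain.
    This is NOT mgcv's actual basis; it is just for illustration."""
    cols = [np.ones_like(x), x]
    cols += [np.clip(x - k, 0.0, None) ** 3 for k in knots]
    return np.column_stack(cols)                # n-by-(2 + number of knots)

x = np.linspace(0.0, 1.0, 200)
knots = np.linspace(0.1, 0.9, 8)
X = toy_cubic_basis(x, knots)
print(X.shape)                                  # (200, 10): one column per basis function
```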
As before, this can be collected into the matrix form Y=Xβ+ε, where the model matrix X now has one column for each basis function evaluated at every value of x.
This is an example of unpenalized regression, and one of the main strengths of MGCV is its smoothness estimation via a penalty matrix and smoothing parameter. In other words, instead of:
β=(XTX)−1XTY
we have:
β=(XTX+λS)−1XTY
where S is a quadratic i-by-i penalty matrix and λ is a scalar smoothing parameter. I will not go into the specification of the penalty matrix here, but it should suffice to say that for any given basis expansion of some independent variable and definition of a quadratic "wiggliness" penalty (for example, a second-derivative penalty), one can calculate the penalty matrix S.
MGCV can use various means of estimating the optimal smoothing parameter λ. I will not go into that subject since my goal here was to give a broad overview of how a univariate smooth is constructed, which I believe I have done.
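To pull this section together, here is a self-contained numpy sketch of the penalized fit. Two substitutions are mine and purely illustrative: a squared second-difference penalty on β stands in for mgcv's integrated second-derivative penalty, and a naive GCV grid search stands in for mgcv's far more careful estimation of λ.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2.0 * np.pi * x) + rng.normal(0.0, 0.2, x.size)

# Toy knot-based cubic basis (intercept, linear, truncated cubics), as above
knots = np.linspace(0.1, 0.9, 8)
X = np.column_stack([np.ones_like(x), x] +
                    [np.clip(x - k, 0.0, None) ** 3 for k in knots])

# Quadratic "wiggliness" penalty S: squared second differences of beta here,
# whereas mgcv integrates squared second derivatives of the fitted smooth.
D = np.diff(np.eye(X.shape[1]), n=2, axis=0)
S = D.T @ D

def fit(lam):
    """beta = (X'X + lambda*S)^-1 X'Y, without forming the inverse explicitly."""
    return np.linalg.solve(X.T @ X + lam * S, X.T @ y)

def gcv(lam):
    """Naive generalized cross-validation score; smaller is better."""
    A = X @ np.linalg.solve(X.T @ X + lam * S, X.T)   # hat (influence) matrix
    r = y - A @ y
    return y.size * (r @ r) / (y.size - np.trace(A)) ** 2

lams = 10.0 ** np.arange(-6.0, 7.0)
lam_best = min(lams, key=gcv)                   # crude grid search over lambda
beta = fit(lam_best)
fitted = X @ beta
```

The point is only the shape of the computation: a model matrix, a quadratic penalty, and one smoothing parameter chosen by a prediction-error criterion.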
2) Multivariate smooth
The above explanation can be generalized to multiple dimensions. Let's go back to our model that gives the response y as a function f of predictors x and z. The restriction to two independent variables will prevent cluttering the explanation with arcane notation. The model is then:
y=f(x,z)+ε
Now, it should be intuitively obvious that we are going to represent f(x,z) with a basis expansion (that is, a superposition of basis functions) just like we did in the univariate case of f(x) above. It should also be obvious that at least one, and almost certainly many more, of these basis functions must be functions of both x and z (if this was not the case, then implicitly f would be separable such that f(x,z)=fx(x)+fz(z)). A visual illustration of a multidimensional spline basis can be found here. A full two dimensional basis expansion of dimension i−3 could look something like:
y=β1+β2x+β3z+β4f1(x,z)+...+βifi−3(x,z)+ε
I think it's pretty clear that we can still represent this in matrix form with:
Y=Xβ+ε
by simply evaluating each basis function at every unique combination of x and z. The solution is still:
β=(XTX)−1XTY
Computing the second derivative penalty matrix is very much the same as in the univariate case, except that instead of integrating the second derivative of each basis function with respect to a single variable, we integrate the sum of all second derivatives (including partials) with respect to all independent variables. The details of the foregoing are not especially important: the point is that we can still construct penalty matrix S and use the same method to get the optimal value of smoothing parameter λ, and given that smoothing parameter, the vector of coefficients is still:
β=(XTX+λS)−1XTY
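The same sketch generalizes to two variables. Everything below is again illustrative rather than mgcv's machinery: the Gaussian bumps centred on a grid of knots stand in for a genuine bivariate basis such as thin plate regression splines, and the crude quadratic penalty stands in for the integrated-derivative penalty. The point is that each bump is a function of both x and z jointly, and the fitting equation is unchanged.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
x = rng.uniform(0.0, 1.0, n)
z = rng.uniform(0.0, 1.0, n)
y = np.sin(2.0 * np.pi * x) * np.cos(2.0 * np.pi * z) + rng.normal(0.0, 0.2, n)

# Illustrative bivariate basis: intercept, linear terms, and Gaussian bumps
# centred on a 5-by-5 grid of knots in the (x, z) plane. Each bump depends on
# both variables, so f(x, z) is not forced to be separable.
cx, cz = np.meshgrid(np.linspace(0.1, 0.9, 5), np.linspace(0.1, 0.9, 5))
centres = np.column_stack([cx.ravel(), cz.ravel()])          # 25 knots
bumps = [np.exp(-((x - a) ** 2 + (z - b) ** 2) / 0.05) for a, b in centres]
X = np.column_stack([np.ones(n), x, z] + bumps)              # n-by-28 model matrix

# Stand-in quadratic penalty and the same penalized solve as before;
# note the single lambda, i.e. an isotropic penalty.
D = np.diff(np.eye(X.shape[1]), n=2, axis=0)
S = D.T @ D
lam = 1.0
beta = np.linalg.solve(X.T @ X + lam * S, X.T @ y)
fitted = X @ beta
```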
Now, this two-dimensional smooth has an isotropic penalty: this means that a single value of λ applies in both directions. This works fine when both x and z are on approximately the same scale, such as a spatial application. But what if we replace spatial variable z with temporal variable t? The units of t may be much larger or smaller than the units of x, and this can throw off the integration of our second derivatives because some of those derivatives will contribute disproportionately to the overall integration (for example, if we measure t in nanoseconds and x in light years, the integral of the second derivative with respect to t may be vastly larger than the integral of the second derivative with respect to x, and thus "wiggliness" along the x direction may go largely unpenalized). Slide 15 of the "smooth toolbox" I linked has more detail on this topic.
It is worth noting that we did not decompose the basis functions into marginal bases of x and z. The implication here is that multivariate smooths must be constructed from bases supporting multiple variables. Tensor product smooths support construction of multivariate bases from univariate marginal bases, as I explain below.
3) Tensor product smooths
Tensor product smooths address the issue of modeling responses to interactions of multiple inputs with different units. Let's suppose we have a response y that is a function f of spatial variable x and temporal variable t. Our model is then:
y=f(x,t)+ε
What we'd like to do is construct a two-dimensional basis for the variables x and t. This will be a lot easier if we can represent f as:
f(x,t)=fx(x)ft(t)
In an algebraic / analytical sense, this is not necessarily possible. But remember, we are discretizing the domains of x and t (imagine a two-dimensional "lattice" defined by the locations of knots on the x and t axes) such that the "true" function f is represented by the superposition of basis functions. Just as we assumed that a very complex univariate function may be approximated by a simple cubic function on a specific interval of its domain, we may assume that the non-separable function f(x,t) may be approximated by the product of simpler functions fx(x) and ft(t) on an interval—provided that our choice of basis dimensions makes those intervals sufficiently small!
Our basis expansion, given an i-dimensional basis in x and j-dimensional basis in t, would then look like:
y=β1+β2x+β3fx1(x)+β4fx2(x)+...+βifx(i−2)(x)
+βi+1t+βi+2tx+βi+3tfx1(x)+βi+4tfx2(x)+...+β2itfx(i−2)(x)
+β2i+1ft1(t)+β2i+2ft1(t)x+β2i+3ft1(t)fx1(x)+β2i+4ft1(t)fx2(x)+...+β3ift1(t)fx(i−2)(x)
+…
+βijft(j−2)(t)fx(i−2)(x)+ε
Which may be interpreted as a tensor product. Imagine that we evaluated each basis function in x and t, thereby constructing n-by-i and n-by-j model matrices X and T, respectively. We could then compute the n²-by-ij tensor product X⊗T of these two model matrices and reorganize into columns, such that each column represented a unique combination of one x basis function and one t basis function. Recall that the marginal model matrices had i and j columns, respectively. These values correspond to their respective basis dimensions. Our new two-variable basis should then have dimension ij, and therefore the same number of columns in its model matrix.
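In code, that bookkeeping amounts to taking the product of every pair of marginal basis columns, observation by observation (the row-wise version of the tensor product, which keeps n rows but still has one column per combination of marginal basis functions). A hedged numpy sketch with toy marginal bases follows; the column ordering, with the x basis index varying fastest, is my choice so that it lines up with the penalty formulas quoted further down.

```python
import numpy as np

def toy_basis(v, knots):
    """Toy univariate marginal basis: intercept, linear term, truncated cubics."""
    return np.column_stack([np.ones_like(v), v] +
                           [np.clip(v - k, 0.0, None) ** 3 for k in knots])

rng = np.random.default_rng(3)
n = 300
x = rng.uniform(0.0, 1.0, n)                     # "spatial" variable
t = rng.uniform(0.0, 10.0, n)                    # "temporal" variable, different units

Xm = toy_basis(x, np.linspace(0.1, 0.9, 4))      # n-by-i marginal model matrix, i = 6
Tm = toy_basis(t, np.linspace(2.0, 8.0, 3))      # n-by-j marginal model matrix, j = 5
i, j = Xm.shape[1], Tm.shape[1]

# Row-wise tensor product: for each observation, the Kronecker product of its
# row of Tm with its row of Xm. Each of the i*j columns is the product of one
# x basis function and one t basis function evaluated at that observation.
XT = np.vstack([np.kron(Tm[r], Xm[r]) for r in range(n)])
print(XT.shape)                                  # (300, 30) = n-by-(i*j)
```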
NOTE: I'd like to point out that since we explicitly constructed the tensor product basis functions by taking products of marginal basis functions, tensor product bases may be constructed from marginal bases of any type. They need not support more than one variable, unlike the multivariate smooth discussed above.
In reality, this process results in an overall basis expansion of dimension ij−i−j+1 because the full multiplication includes multiplying every t basis function by the x-intercept βx1 (so we subtract j) as well as multiplying every x basis function by the t-intercept βt1 (so we subtract i), but we must add the intercept back in by itself (so we add 1). This is known as applying an identifiability constraint.
So we can represent this as:
y=β1+β2x+β3t+β4f1(x,t)+β5f2(x,t)+...+βij−i−j+1fij−i−j−2(x,t)+ε
Where each of the multivariate basis functions f is the product of a pair of marginal x and t basis functions. Again, it's pretty clear having constructed this basis that we can still represent this with the matrix equation:
Y=Xβ+ε
Which (still) has the solution:
β=(XTX)−1XTY
Where the model matrix X has ij−i−j+1 columns. As for the penalty terms Jx and Jt, these are constructed separately for each independent variable as follows:
Jx=βT(Ij⊗Sx)β
and,
Jt=βT(St⊗Ii)β
This allows for an overall anisotropic (different in each direction) penalty (Note: the penalties on the second derivative of x are added up at each knot on the t axis, and vice versa). The smoothing parameters λx and λt may now be estimated in much the same way as the single smoothing parameter was for the univariate and multivariate smooths. The result is that the overall shape of a tensor product smooth is invariant to rescaling of its independent variables.
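Finally, a hedged sketch of the anisotropic penalty, reusing the toy marginal bases from the previous sketch (second-difference penalties again stand in for mgcv's derivative penalties, and the two smoothing parameters are simply fixed rather than estimated): the x penalty is repeated across the t basis via Ij⊗Sx, the t penalty across the x basis via St⊗Ii, and the penalized solve is otherwise unchanged.

```python
import numpy as np

def toy_basis(v, knots):
    """Toy marginal basis: intercept, linear term, truncated cubics (as above)."""
    return np.column_stack([np.ones_like(v), v] +
                           [np.clip(v - k, 0.0, None) ** 3 for k in knots])

def diff_penalty(p):
    """Second-difference penalty matrix, standing in for a derivative penalty."""
    D = np.diff(np.eye(p), n=2, axis=0)
    return D.T @ D

rng = np.random.default_rng(4)
n = 300
x = rng.uniform(0.0, 1.0, n)                              # spatial variable
t = rng.uniform(0.0, 10.0, n)                             # temporal variable
y = np.sin(2.0 * np.pi * x) * np.cos(2.0 * np.pi * t / 10.0) + rng.normal(0.0, 0.2, n)

Xm = toy_basis(x, np.linspace(0.1, 0.9, 4))               # n-by-i, i = 6
Tm = toy_basis(t, np.linspace(2.0, 8.0, 3))               # n-by-j, j = 5
i, j = Xm.shape[1], Tm.shape[1]
XT = np.vstack([np.kron(Tm[r], Xm[r]) for r in range(n)]) # n-by-(i*j) tensor product basis

# Anisotropic penalty: wiggliness in x is penalized at every t basis function
# (Ij kron Sx) and vice versa (St kron Ii), each with its own lambda.
lam_x, lam_t = 1.0, 1e-3                                  # illustrative values only
S = lam_x * np.kron(np.eye(j), diff_penalty(i)) + lam_t * np.kron(diff_penalty(j), np.eye(i))

beta = np.linalg.solve(XT.T @ XT + S, XT.T @ y)           # same penalized solve as before
fitted = XT @ beta
```

Because each direction gets its own smoothing parameter, rescaling t (say, changing its units) can be absorbed by λt alone, which is the invariance property mentioned above.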
I recommend reading all the vignettes on the MGCV website, as well as "Generalized Additive Models: An Introduction with R." Long live Simon Wood.