Intuition behind tensor product interactions in GAMs (mgcv package in R)



Generalized additive models are those where, for example,

y = α + f1(x1) + f2(x2) + ε_i

The functions are smooth, and are to be estimated, usually by penalized splines. mgcv is an R package that does this, and its author (Simon Wood) has written a book about the package with R examples. Ruppert et al. (2003) wrote a far more accessible book about simpler versions of the same thing.

My question is about interactions within these sorts of models. What if I want to do something like the following:

y = α + f1(x1) + f2(x2) + f3(x1 × x2) + ε_i

If we were in OLS land (where f is just a beta), I would have no problem interpreting the estimate f̂3. If we estimate via penalized splines, I also have no problem with its interpretation in the additive context.

But GAMs in the mgcv package have these things called "tensor product smooths". I google "tensor product" and my eyes immediately glaze over trying to read the explanations that I find. Either I'm not smart enough, or the math isn't explained very well, or both.

Instead of coding

normal = gam(y~s(x1)+s(x2)+s(x1*x2))

a tensor product would do the same(?) thing by

what = gam(y~te(x1,x2))

When I do

plot(what)

or

vis.gam(what)

I get really cool output. But I have no idea what is going on inside the black box that is te(), nor how to interpret that cool output. Just the other night I had a nightmare that I was giving a seminar. I showed everyone a cool graph, they asked me what it meant, and I didn't know. Then I discovered that I had no clothes on.

Could anybody help me, and posterity, by giving a bit of the mechanics and intuition on what is going on under the hood here? Ideally by saying a bit about the difference between the normal additive interaction case and the tensor case? Bonus points for saying everything in simple English before moving on to the math.


A simple example, taken from the package author's book:

library(mgcv)
data(trees)
ct5 <- gam(Volume ~ te(Height, Girth, k = 5), family = Gamma(link = log), data = trees)
ct5
vis.gam(ct5)
plot(ct5, too.far = 0.15)
generic_user

Answers:



I'll try to answer this in three steps: first, let's identify exactly what we mean by a univariate smooth; next, we'll describe a multivariate smooth (specifically, a smooth of two variables); finally, I'll make my best attempt at describing a tensor product smooth.

1) Univariate smooths

Let's say we have some response data y which we hypothesize is an unknown function f of a predictor variable x, plus some error ε. The model would be:

y=f(x)+ε

Now, to fit this model, we have to identify the functional form of f. The way we do this is by identifying basis functions, which are superposed in order to represent the function f in its entirety. A very simple example is a linear regression, in which the basis functions are just β2x and the intercept β1. Applying the basis expansion, we have

y=β1+β2x+ε

In matrix form, we would have:

Y=Xβ+ε

Where Y is an n-by-1 column vector, X is an n-by-2 model matrix, β is a 2-by-1 column vector of model coefficients, and ε is an n-by-1 column vector of errors. X has two columns because there are two terms in our basis expansion: the linear term and the intercept.

The same principle applies for basis expansion in MGCV, although the basis functions are much more sophisticated. Specifically, individual basis functions need not be defined over the full domain of the independent variable x. Such is often the case when using knot-based bases (see "knot based example"). The model is then represented as the sum of the basis functions, each of which is evaluated at every value of the independent variable. However, as I mentioned, some of these basis functions take on a value of zero outside of a given interval and thus do not contribute to the basis expansion outside of that interval. As an example, consider a cubic spline basis in which each basis function is symmetric about a different value (knot) of the independent variable -- in other words, every basis function looks the same but is just shifted along the axis of the independent variable (this is an oversimplification, as any practical basis will also include an intercept and a linear term, but hopefully you get the idea).

To be explicit, a basis expansion of dimension i−2 could look like:

y = β1 + β2x + β3f1(x) + β4f2(x) + ... + βi f_{i−2}(x) + ε

where each function f is, perhaps, a cubic function of the independent variable x.

In matrix form, we would again have Y = Xβ + ε, where X is now an n-by-i model matrix and β an i-by-1 coefficient vector.
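To make this concrete, here is a small R sketch (the data are made up; smoothCon() is the mgcv function that builds a smooth without fitting it) showing that the model matrix X simply contains every basis function evaluated at every value of x:

library(mgcv)

set.seed(1)
x <- sort(runif(200))    # hypothetical predictor
dat <- data.frame(x = x)

# Construct a cubic regression spline basis of dimension 10.
# sm$X is the n-by-10 model matrix: one column per basis function,
# evaluated at every value of x.
sm <- smoothCon(s(x, bs = "cr", k = 10), data = dat)[[1]]
dim(sm$X)    # 200 x 10

# Each curve below is one column of X, i.e. one basis function
matplot(x, sm$X, type = "l", lty = 1, ylab = "basis function value")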

This is an example of unpenalized regression, and one of the main strengths of MGCV is its smoothness estimation via a penalty matrix and smoothing parameter. In other words, instead of:

β = (X^T X)^{-1} X^T Y

we have:

β = (X^T X + λS)^{-1} X^T Y

where S is a quadratic i-by-i penalty matrix and λ is a scalar smoothing parameter. I will not go into the specification of the penalty matrix here, but it should suffice to say that for any given basis expansion of some independent variable and definition of a quadratic "wiggliness" penalty (for example, a second-derivative penalty), one can calculate the penalty matrix S.

MGCV can use various means of estimating the optimal smoothing parameter λ. I will not go into that subject since my goal here was to give a broad overview of how a univariate smooth is constructed, which I believe I have done.
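As a sketch of the two formulas above (continuing the hypothetical data from the previous block; the simulated response and the fixed λ are arbitrary), the penalized solution can be computed by hand and compared against the smoothing parameter that gam() estimates itself:

y <- sin(2 * pi * x) + rnorm(200, sd = 0.3)    # made-up response

X <- sm$X         # basis functions evaluated at x
S <- sm$S[[1]]    # the quadratic penalty matrix for this basis

# Penalized least squares with a fixed, arbitrary smoothing parameter:
# beta = (X'X + lambda * S)^{-1} X'y
lambda <- 1
beta <- solve(t(X) %*% X + lambda * S, t(X) %*% y)

# mgcv instead estimates lambda (here by REML)
fit <- gam(y ~ s(x, bs = "cr", k = 10), method = "REML")
fit$sp    # the estimated smoothing parameter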

2) Multivariate smooths

The above explanation can be generalized to multiple dimensions. Let's go back to our model that gives the response y as a function f of predictors x and z. The restriction to two independent variables will prevent cluttering the explanation with arcane notation. The model is then:

y=f(x,z)+ε

Now, it should be intuitively obvious that we are going to represent f(x,z) with a basis expansion (that is, a superposition of basis functions) just like we did in the univariate case of f(x) above. It should also be obvious that at least one, and almost certainly many more, of these basis functions must be functions of both x and z (if this were not the case, then f would implicitly be separable, such that f(x,z) = f_x(x) + f_z(z)). A visual illustration of a multidimensional spline basis can be found here. A full two-dimensional basis expansion of dimension i−3 could look something like:

y = β1 + β2x + β3z + β4f1(x,z) + ... + βi f_{i−3}(x,z) + ε

I think it's pretty clear that we can still represent this in matrix form with:

Y=Xβ+ε

by simply evaluating each basis function at every unique combination of x and z. The solution is still:

β = (X^T X)^{-1} X^T Y

Computing the second derivative penalty matrix is very much the same as in the univariate case, except that instead of integrating the second derivative of each basis function with respect to a single variable, we integrate the sum of all second derivatives (including partials) with respect to all independent variables. The details of the foregoing are not especially important: the point is that we can still construct penalty matrix S and use the same method to get the optimal value of smoothing parameter λ, and given that smoothing parameter, the vector of coefficients is still:

β = (X^T X + λS)^{-1} X^T Y

Now, this two-dimensional smooth has an isotropic penalty: this means that a single value of λ applies in both directions. This works fine when both x and z are on approximately the same scale, such as a spatial application. But what if we replace spatial variable z with temporal variable t? The units of t may be much larger or smaller than the units of x, and this can throw off the integration of our second derivatives because some of those derivatives will contribute disproportionately to the overall integration (for example, if we measure t in nanoseconds and x in light years, the integral of the second derivative with respect to t may be vastly larger than the integral of the second derivative with respect to x, and thus "wiggliness" along the x direction may go largely unpenalized). Slide 15 of the "smooth toolbox" I linked has more detail on this topic.
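The sensitivity of an isotropic smooth to the relative scaling of its variables is easy to demonstrate (the data below are simulated purely for illustration): fit the same thin plate smooth before and after rescaling one variable and compare the fits.

set.seed(2)
n <- 400
x <- runif(n); t <- runif(n)
y <- sin(2 * pi * x) * cos(2 * pi * t) + rnorm(n, sd = 0.2)

iso1 <- gam(y ~ s(x, t))    # isotropic thin plate smooth
t_big <- t * 1e6            # the same variable in much smaller units
iso2 <- gam(y ~ s(x, t_big))

# The fitted values differ: a single lambda cannot penalize
# both directions appropriately once the scales diverge
max(abs(fitted(iso1) - fitted(iso2)))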

It is worth noting that we did not decompose the basis functions into marginal bases of x and z. The implication here is that multivariate smooths must be constructed from bases supporting multiple variables. Tensor product smooths support construction of multivariate bases from univariate marginal bases, as I explain below.

3) Tensor product smooths

Tensor product smooths address the issue of modeling responses to interactions of multiple inputs with different units. Let's suppose we have a response y that is a function f of spatial variable x and temporal variable t. Our model is then:

y=f(x,t)+ε

What we'd like to do is construct a two-dimensional basis for the variables x and t. This will be a lot easier if we can represent f as:

f(x,t) = f_x(x) f_t(t)

In an algebraic / analytical sense, this is not necessarily possible. But remember, we are discretizing the domains of x and t (imagine a two-dimensional "lattice" defined by the locations of knots on the x and t axes) such that the "true" function f is represented by the superposition of basis functions. Just as we assumed that a very complex univariate function may be approximated by a simple cubic function on a specific interval of its domain, we may assume that the non-separable function f(x,t) may be approximated by the product of simpler functions fx(x) and ft(t) on an interval—provided that our choice of basis dimensions makes those intervals sufficiently small!

Our basis expansion, given an i-dimensional basis in x and j-dimensional basis in t, would then look like:

y = β1 + β2x + β3f_{x1}(x) + β4f_{x2}(x) + ... + βi f_{x(i−2)}(x)
  + β_{i+1}t + β_{i+2}tx + β_{i+3}t f_{x1}(x) + β_{i+4}t f_{x2}(x) + ... + β_{2i}t f_{x(i−2)}(x)
  + β_{2i+1}f_{t1}(t) + β_{2i+2}f_{t1}(t)x + β_{2i+3}f_{t1}(t)f_{x1}(x) + β_{2i+4}f_{t1}(t)f_{x2}(x) + ... + β_{3i}f_{t1}(t)f_{x(i−2)}(x)
  + ...
  + β_{ij}f_{t(j−2)}(t)f_{x(i−2)}(x) + ε

This may be interpreted as a tensor product. Imagine that we evaluated each basis function in x and t, thereby constructing n-by-i and n-by-j model matrices X and T, respectively. We could then compute the n²-by-ij tensor product X ⊗ T of these two model matrices and reorganize it into columns, such that each column represents a unique combination of one x basis function and one t basis function. Recall that the marginal model matrices had i and j columns, respectively; these values correspond to their respective basis dimensions. Our new two-variable basis should then have dimension ij, and therefore the same number of columns in its model matrix.

NOTE: I'd like to point out that since we explicitly constructed the tensor product basis functions by taking products of marginal basis functions, tensor product bases may be constructed from marginal bases of any type. They need not support more than one variable, unlike the multivariate smooth discussed above.
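mgcv exposes this row-wise multiplication directly via tensor.prod.model.matrix(), so the construction can be sketched by hand (continuing the simulated x and t above; the basis dimensions i = j = 5 are arbitrary):

# Marginal bases: i = 5 basis functions in x, j = 5 in t
smx <- smoothCon(s(x, bs = "cr", k = 5), data = data.frame(x = x))[[1]]
smt <- smoothCon(s(t, bs = "cr", k = 5), data = data.frame(t = t))[[1]]

# Row-wise tensor product of the marginal model matrices: each column
# is the product of one x basis function and one t basis function
XT <- tensor.prod.model.matrix(list(smx$X, smt$X))
dim(XT)    # n by i*j = 400 x 25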

In reality, this process results in an overall basis expansion of dimension ij−i−j+1, because the full multiplication includes multiplying every t basis function by the x-intercept β_{x1} (so we subtract j) as well as multiplying every x basis function by the t-intercept β_{t1} (so we subtract i), but we must add the intercept back in by itself (so we add 1). This is known as applying an identifiability constraint.

So we can represent this as:

y = β1 + β2x + β3t + β4f1(x,t) + β5f2(x,t) + ... + β_{ij−i−j+1} f_{ij−i−j−2}(x,t) + ε

Where each of the multivariate basis functions f is the product of a pair of marginal x and t basis functions. Again, it's pretty clear having constructed this basis that we can still represent this with the matrix equation:

Y=Xβ+ε

Which (still) has the solution:

β = (X^T X)^{-1} X^T Y

Where the model matrix X has ij−i−j+1 columns. As for the penalty matrices J_x and J_t, these are constructed separately for each independent variable as follows:

J_x = β^T (I_j ⊗ S_x) β

and,

J_t = β^T (S_t ⊗ I_i) β

This allows for an overall anisotropic (different in each direction) penalty (Note: the penalties on the second derivative of x are added up at each knot on the t axis, and vice versa). The smoothing parameters λx and λt may now be estimated in much the same way as the single smoothing parameter was for the univariate and multivariate smooths. The result is that the overall shape of a tensor product smooth is invariant to rescaling of its independent variables.
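Both properties can be checked in R (again with the simulated data from above): a te() smooth carries one penalty matrix per margin, and its fit is unchanged when one of its variables is rescaled, unlike the isotropic smooth earlier.

# A tensor product smooth has one penalty per marginal basis (anisotropic)
smte <- smoothCon(te(x, t, k = c(5, 5)), data = data.frame(x = x, t = t))[[1]]
length(smte$S)    # 2: one penalty matrix for x, one for t

# The tensor product fit is invariant to rescaling of a variable
te1 <- gam(y ~ te(x, t))
te2 <- gam(y ~ te(x, t_big))
max(abs(fitted(te1) - fitted(te2)))    # essentially zero, up to numerical error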

I recommend reading all the vignettes on the MGCV website, as well as "Generalized Additive Models: An Introduction with R." Long live Simon Wood.


Nice answer. I've since learned quite a lot more than I knew three years ago. But I'm not sure that I would have understood 3 years ago what you wrote today. Or maybe I would have. I think the place to start is to think of a basis expansion in many dimensions as a "net" across the variable space. I suppose tensors can be described as a net with rectangular patterns... And maybe different "shear" forces pulling from each direction.
generic_user

On another note, I would caution you against thinking of the tensor product as representing something spatial. This is because the actual tensor product of marginal x and t basis functions will include tons of zeros which represent the evaluation of basis functions outside of their defined range. The actual tensor product will usually be very sparse.
Josh

Thanks for this great summary! Just one remark: The equation after "Our basis expansion" is not completely correct. It does give the correct basis functions, but it gives a parametrization where the corresponding parameters are of product form (β_{xi}·β_{tj}).
jarauh

@Josh Ok, I tried. It's not easy to have it correct and easy to understand at the same time (and to follow someone else's notation). By the way, the link to smooth-toolbox.pdf seems to be broken.
jarauh

Looks good. Apparently your edit was rejected, but I overrode the rejection and approved it. When I started writing this answer I didn't realize just how confusing the expansions would look. I should probably go back and rewrite it with pi (product) notation one of these days.
Josh
Licensed under cc by-sa 3.0 with attribution required.