# Nonlinear Models


Dan Lizotte, 2018-10-18

Nonlinearly separable data

• A linear boundary might be too simple to capture the class structure.

• One way of getting a nonlinear decision boundary in the input space is to find a linear decision boundary in an expanded space (similar to polynomial regression).

• Thus, each input xi is replaced by ϕ(xi), where ϕ is called a feature mapping.

Separability by adding features

[Figure: two panels. Left: the data plotted against the original feature x1. Right: the same data in the expanded space, with the added feature x2 = x1² on the vertical axis.]

Separability by adding features

[Figure: two panels. Left: the quadratic discriminant w0 + w(1)x + w(2)x² plotted against x. Right: the data in the expanded space (x1, x2 = x1²).]

Separability by adding features

[Figure: two panels. Left: the quadratic discriminant w0 + w(1)x + w(2)x² plotted against x. Right: the data in the expanded space (x1, x2 = x1²) with the corresponding linear boundary b(1) + b(2)x.]

more flexible decision boundary ≈ enriched feature space
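As a concrete version of this idea, here is a minimal sketch (NumPy plus scikit-learn's LinearSVC; the data values are made up for illustration): a one-dimensional data set that no single threshold can separate becomes linearly separable after adding the feature x2 = x1².

```python
import numpy as np
from sklearn.svm import LinearSVC

# 1-D data that is not linearly separable: the positive class sits between
# the two groups of negatives (hypothetical values chosen for illustration).
x1 = np.array([-2.0, -1.5, -1.2, -0.4, 0.0, 0.3, 1.1, 1.6, 2.0])
y  = np.array([-1, -1, -1, 1, 1, 1, -1, -1, -1])

# No threshold on x1 alone separates the classes, but in the expanded
# space (x1, x2) with x2 = x1**2 a linear boundary does.
phi = np.column_stack([x1, x1 ** 2])   # feature mapping phi(x) = (x, x^2)

clf = LinearSVC(C=1.0).fit(phi, y)
print(clf.score(phi, y))               # expect 1.0 on this toy set
```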

Margin optimization in feature space

• Replacing xi with ϕ(xi), the dual form becomes:

$$
\max_{\alpha}\; \sum_{i=1}^{n} \alpha_i \;-\; \frac{1}{2}\sum_{i,j=1}^{n} y_i y_j \alpha_i \alpha_j \,\big(\phi(x_i)\cdot\phi(x_j)\big)
\qquad \text{s.t. } 0 \le \alpha_i \le C \text{ and } \sum_{i=1}^{n} \alpha_i y_i = 0
$$

• Classification of an input x is given by:

$$
h_{w,w_0}(x) = \operatorname{sign}\!\left( \sum_{i=1}^{n} \alpha_i y_i \,\big(\phi(x_i)\cdot\phi(x)\big) + w_0 \right)
$$

• Note that in the dual form, to do both SVM training and prediction, we only ever need to compute dot-products of feature vectors.
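A minimal sketch of that observation (NumPy; the feature map `phi` and the α values are hypothetical placeholders): once the n × n Gram matrix of dot products is computed, the dual objective can be evaluated without touching ϕ again.

```python
import numpy as np

def phi(X):
    """Hypothetical feature mapping: append the squared inputs."""
    return np.hstack([X, X ** 2])

X = np.random.randn(5, 3)          # n = 5 examples, p = 3 inputs
y = np.array([1, -1, 1, -1, 1])
alpha = np.full(5, 0.1)            # placeholder dual variables

Phi = phi(X)                       # n x 2p matrix of feature vectors
G = Phi @ Phi.T                    # n x n Gram matrix of dot products

# Dual objective: sum_i alpha_i - 1/2 sum_{i,j} y_i y_j alpha_i alpha_j G_ij
dual_obj = alpha.sum() - 0.5 * (np.outer(alpha * y, alpha * y) * G).sum()
print(dual_obj)
```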

Kernel functions

• Whenever a learning algorithm (such as SVMs) can be written in terms of dot-products, it can be generalized to kernels.

• A kernel is any function K : ℝⁿ × ℝⁿ → ℝ which corresponds to a dot product for some feature mapping ϕ:

K(x1, x2) = ϕ(x1) · ϕ(x2) for some ϕ.

• Conversely, by choosing a feature mapping ϕ, we implicitly choose a kernel function.

• Recall that ϕ(x1) · ϕ(x2) ∝ cos∠(ϕ(x1), ϕ(x2)) where ∠ denotes the angle between the vectors, so a kernel function can be thought of as a notion of similarity.

Example: Quadratic kernel

• Let K(x, z) = (x · z)².

• Is this a kernel?

$$
K(x, z) = \left(\sum_{i=1}^{p} x_i z_i\right)\left(\sum_{j=1}^{p} x_j z_j\right) = \sum_{i,j \in \{1,\dots,p\}} (x_i x_j)\,(z_i z_j)
$$

• Hence, it is a kernel, with feature mapping:

$$\phi(x) = \langle x_1^2,\; x_1 x_2,\; \dots,\; x_1 x_p,\; x_2 x_1,\; x_2^2,\; \dots,\; x_p^2 \rangle$$

Feature vector includes all squares of elements and all cross terms.

• Note that computing ϕ takes O(p²), but computing K takes only O(p)!
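A quick numerical check of this identity (a NumPy sketch; `phi_quadratic` builds all p² products x_i x_j, matching the feature vector above):

```python
import numpy as np

def phi_quadratic(x):
    """Explicit feature map for K(x, z) = (x . z)^2: all products x_i * x_j."""
    return np.outer(x, x).ravel()                 # length p^2, costs O(p^2)

rng = np.random.default_rng(0)
x = rng.standard_normal(4)
z = rng.standard_normal(4)

lhs = np.dot(x, z) ** 2                           # kernel evaluation, O(p)
rhs = np.dot(phi_quadratic(x), phi_quadratic(z))  # explicit dot product, O(p^2)
print(np.isclose(lhs, rhs))                       # True
```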

Polynomial kernels

• More generally, K(x, z) = (1 + x · z)^d is a kernel, for any positive integer d.

• If we expand the product above, we get terms of all degrees up to and including d (in xi and zi).

• If we use the primal form of the SVM, each of these will have a weight associated with it!

• Curse of dimensionality: it is very expensive both to optimize and to predict with an SVM in primal form with many features.
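To see how fast the primal feature space grows: the expansion of (1 + x · z)^d has one coordinate per monomial of degree at most d in p variables, i.e. C(p + d, d) of them. A quick sketch using Python's math.comb:

```python
from math import comb

# Number of monomials of total degree <= d in p variables: C(p + d, d).
for p in (10, 100, 1000):
    for d in (2, 3, 5):
        print(f"p={p:5d}, d={d}: {comb(p + d, d):,} primal features")
# e.g. p=1000, d=5 already gives roughly 8.4e12 features, while the kernel
# K(x, z) = (1 + x . z)**d still costs only O(p) to evaluate.
```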


The “kernel trick”

• If we work with the dual, we do not actually have to ever compute the features using ϕ. We just have to compute the similarity K.

• That is, we can solve the dual for the αi:

$$
\max_{\alpha}\; \sum_{i=1}^{n} \alpha_i \;-\; \frac{1}{2}\sum_{i,j=1}^{n} y_i y_j \alpha_i \alpha_j K(x_i, x_j)
\qquad \text{s.t. } 0 \le \alpha_i \le C,\; \sum_{i=1}^{n} \alpha_i y_i = 0
$$

• The class of a new input x is computed as:

$$
h_{w,w_0}(x) = \operatorname{sign}\!\left( \sum_{i=1}^{n} \alpha_i y_i K(x_i, x) + w_0 \right)
$$

• Often, K(·, ·) can be evaluated in O(p) time—a big savings!
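A minimal sketch of this prediction rule (NumPy; the support vectors, labels, dual variables α, bias w0, and the choice of an RBF kernel are hypothetical placeholders, not values from the slides). Only points with αi > 0 contribute, so the sum runs over the support vectors:

```python
import numpy as np

def rbf(x, z, gamma=0.5):
    """RBF kernel K(x, z) = exp(-gamma * ||x - z||^2)."""
    return np.exp(-gamma * np.sum((x - z) ** 2))

# Placeholder "training results": support vectors, labels, dual vars, bias.
X_sv  = np.array([[0.0, 1.0], [1.0, 0.0], [2.0, 2.0]])
y_sv  = np.array([1, -1, 1])
alpha = np.array([0.7, 0.4, 0.3])
w0    = -0.1

def predict(x):
    """h(x) = sign( sum_i alpha_i y_i K(x_i, x) + w0 ), using only K."""
    s = sum(a * y * rbf(xi, x) for a, y, xi in zip(alpha, y_sv, X_sv))
    return np.sign(s + w0)

print(predict(np.array([0.5, 0.5])))
```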

Some other (fairly generic) kernel functions

• K(x, z) = (1 + x · z)d – feature expansion has all monomial terms of degree ≤ d.

• Radial Basis Function (RBF)/Gaussian kernel (most popular):

$$K(x, z) = \exp(-\gamma \,\|x - z\|^2)$$

This kernel has an infinite-dimensional feature expansion, but dot products can still be computed in O(p)!

• Sigmoid kernel: K(x, z) = tanh(c1x · z + c2)
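All three of these kernels are built into scikit-learn's SVC. A minimal sketch (the ring-shaped toy data and the specific parameter values are illustrative; the comments note how the slide's parameterization maps onto scikit-learn's):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.standard_normal((40, 2))
y = np.where(X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0, 1, -1)   # ring-shaped classes

# (1 + x . z)^d  ->  kernel='poly' with gamma=1, coef0=1, degree=d
poly = SVC(kernel="poly", degree=3, gamma=1.0, coef0=1.0, C=1.0).fit(X, y)

# exp(-gamma * ||x - z||^2)  ->  kernel='rbf' with gamma
rbf = SVC(kernel="rbf", gamma=0.5, C=1.0).fit(X, y)

# tanh(c1 * x . z + c2)  ->  kernel='sigmoid' with gamma=c1, coef0=c2
sig = SVC(kernel="sigmoid", gamma=0.1, coef0=0.0, C=1.0).fit(X, y)

for name, clf in [("poly", poly), ("rbf", rbf), ("sigmoid", sig)]:
    print(name, clf.score(X, y))
```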


Example: Radial Basis Function (RBF) / Gaussian kernel

Second brush with “feature construction”

• With polynomial regression, we saw how to construct features to increase the size of the hypothesis space

• This gave more flexible regression functions

• Kernels serve a similar purpose for SVMs: more flexible decision boundaries

• Often not clear what kernel is appropriate for the data at hand; can choose using validation set

SVM Summary

• Linear SVMs find a maximum margin linear separator between the classes

• If classes are not linearly separable,

– Use the soft-margin formulation to allow for errors

– Use a kernel to find a boundary that is non-linear

– Or both (usually both)

• Choosing the soft margin parameter C and choosing the kernel and any kernel parameters must be done using validation (not training)


Getting SVMs to work in practice

• libsvm and liblinear are popular

• Scaling the inputs (xi) is very important (e.g., make them all mean zero, variance 1).

• Two important choices:

– Kernel (and kernel parameters, e.g. γ for the RBF kernel)

– Regularization parameter C

• The parameters may interact – best C may depend on γ

• Together, these control overfitting: best to do a within-fold parameter search, using a validation set

• Clues you might be overfitting: low margin (large weights), or a large fraction of instances are support vectors
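A minimal sketch of that workflow with scikit-learn (the synthetic data, the parameter grid, and the use of 5-fold cross-validation in place of a single validation set are illustrative choices):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scale inputs to mean 0 / variance 1, then fit an RBF-kernel SVM.
pipe = Pipeline([("scale", StandardScaler()), ("svm", SVC(kernel="rbf"))])

# Search C and gamma jointly, since the best C may depend on gamma.
grid = {"svm__C": [0.1, 1, 10, 100], "svm__gamma": [0.001, 0.01, 0.1, 1]}
search = GridSearchCV(pipe, grid, cv=5).fit(X_train, y_train)

print(search.best_params_)
print(search.score(X_test, y_test))   # evaluate on held-out data
```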

Kernels beyond SVMs

• Remember, a kernel is a special kind of similarity measure

• A lot of research is devoted to defining new kernel functions suited to particular tasks / kinds of input objects

• Many kernels are available:

– Information diffusion kernels (Lafferty and Lebanon, 2002)

– Diffusion kernels on graphs (Kondor and Jebara 2003)

– String kernels for text classification (Lodhi et al, 2002)

– String kernels for protein classification (e.g., Leslie et al, 2002)

… and others!

Instance-based learning, Decision Trees

• Non-parametric learning

• k-nearest neighbour

• Efficient implementations

• Variations

Parametric supervised learning

• So far, we have assumed that we have a data set D of labeled examples

• From this, we learn a parameter vector of a fixed size such that some error measure based on the training data is minimized

• These methods are called parametric, and their main goal is to summarize the data using the parameters

• Parametric methods are typically global, i.e. have one set of parameters for the entire data space

• But what if we just remembered the data?


• When new instances arrive, we will compare them with what we know, and determine the answer

Non-parametric (memory-based) learning methods

• Key idea: just store all training examples ⟨xi, yi⟩

• When a query is made, compute the value of the new instance based on the values of the closest (most similar) points

• Requirements:

– A distance function

– How many closest points (neighbors) to look at?

– How do we compute the value of the new point based on the existing values?
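A minimal sketch of these three requirements in NumPy (the Euclidean metric, k = 3, and averaging the neighbours' values are illustrative choices; use a majority vote instead for classification):

```python
import numpy as np

def knn_predict(X_train, y_train, x_query, k=3):
    """Predict the value at x_query from its k closest training points."""
    # 1. Distance function: Euclidean distance to every stored example.
    dists = np.linalg.norm(X_train - x_query, axis=1)
    # 2. How many neighbours: the k smallest distances.
    nearest = np.argsort(dists)[:k]
    # 3. Combine the neighbours' values: here, a simple average.
    return y_train[nearest].mean()

X_train = np.array([[1.0], [2.0], [3.0], [10.0]])
y_train = np.array([1.0, 2.0, 3.0, 10.0])
print(knn_predict(X_train, y_train, np.array([2.5]), k=2))   # 2.5
```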

Simple idea: Connect the dots!

[Figure: binary outcome (binOutcome, with levels N and R) plotted against Radius.Mean, with neighbouring training points connected by line segments.]

Simple idea: Connect the dots!

[Figure: Time plotted against Radius.Mean, with neighbouring training points connected by line segments.]

One-nearest neighbor

• Given: Training data $\{(x_i, y_i)\}_{i=1}^{n}$, distance metric d on X.

• Training: Nothing to do! (just store data)

• Prediction: for x ∈ X

– Find nearest training sample to x.

$$i^* \in \arg\min_{i}\; d(x_i, x)$$

– Predict $y = y_{i^*}$.
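A minimal sketch of this procedure as a tiny class (Euclidean distance is an assumed choice for d): "training" only stores the data, and prediction is a single arg min over distances.

```python
import numpy as np

class OneNearestNeighbour:
    def fit(self, X, y):
        # "Training": nothing to do beyond storing the data.
        self.X, self.y = np.asarray(X), np.asarray(y)
        return self

    def predict(self, x):
        # i* = argmin_i d(x_i, x), then return y_{i*}.
        i_star = np.argmin(np.linalg.norm(self.X - x, axis=1))
        return self.y[i_star]

nn = OneNearestNeighbour().fit([[0, 0], [1, 1], [5, 5]], ["A", "A", "B"])
print(nn.predict(np.array([4, 4])))   # "B"
```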

What does the approximator look like?

• Nearest-neighbor does not explicitly compute decision boundaries

• But the effective decision boundaries are a subset of the Voronoi diagram for the training data


Each line segment is equidistant between two points of opposite classes.

What kind of distance metric?

• Euclidean distance

• Maximum/minimum difference along any axis

• Weighted Euclidean distance (with weights based on domain knowledge)

$$d(x, x') = \sum_{j=1}^{p} u_j\,(x_j - x'_j)^2$$

• An arbitrary distance or similarity function d, specific to the application at hand (works best if you have one)
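A minimal sketch of the weighted distance above (the weights u are hypothetical, standing in for domain knowledge or validation-chosen values), used to find the nearest stored example:

```python
import numpy as np

def weighted_sq_euclidean(x, x_other, u):
    """d(x, x') = sum_j u_j * (x_j - x'_j)^2, as on the slide."""
    return np.sum(u * (x - x_other) ** 2)

# Hypothetical weights: the second input dimension counts twice as much.
u = np.array([1.0, 2.0])

X_train = np.array([[0.0, 0.0], [3.0, 0.1], [0.2, 3.0]])
x_query = np.array([0.0, 1.0])

dists = [weighted_sq_euclidean(xi, x_query, u) for xi in X_train]
print(int(np.argmin(dists)))   # index of the nearest stored example
```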

Distance metric is really important!


Distance metric tricks

• You may need to do preprocessing:

– Scale the input dimensions (or normalize them)

– Remove noisy inputs

– Determine weights for attributes based on cross-validation (or information-theoretic methods)

• Distance metric is often domain-specific

– E.g. st
