The '''Randomized weighted majority algorithm''' is an algorithm in [[machine learning]] theory. It improves the mistake bound of the [[weighted majority algorithm]]. For example, suppose that each day we get a prediction from some number of experts about whether the stock market is up or down, and we then use those predictions to make our own. Our goal is to do nearly as well as the best of the experts in hindsight.
 
== Motivation ==
In [[machine learning]], the [[weighted majority algorithm]] (WMA) is a meta-learning algorithm which "predicts from expert advice".
 
'''The algorithm:'''
 
initialize all experts to weight 1.
for each round:
    poll all the experts and predict based on a weighted majority vote of their predictions.
    cut in half the weights of all experts that make a mistake.
 
Suppose there are <math>n</math> experts and the best expert makes <math>m</math> mistakes. The [[weighted majority algorithm]] makes at most <math>2.4(\log_2 n + m)</math> mistakes.
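For concreteness, here is a minimal Python sketch of the deterministic algorithm on a binary prediction task; the round-by-round input format and the tie-breaking rule are assumptions made for illustration:

<syntaxhighlight lang="python">
def weighted_majority(rounds, truths):
    """Deterministic weighted majority over 0/1 predictions.

    rounds[t][i] is expert i's 0/1 prediction on round t;
    truths[t] is the correct 0/1 answer (illustrative interface).
    """
    n = len(rounds[0])
    weights = [1.0] * n                 # initialize all experts to weight 1
    mistakes = 0
    for preds, truth in zip(rounds, truths):
        # predict with the weighted majority vote (ties broken toward 1)
        weight_for_1 = sum(w for w, p in zip(weights, preds) if p == 1)
        if (1 if 2 * weight_for_1 >= sum(weights) else 0) != truth:
            mistakes += 1
        # cut in half the weight of every expert that made a mistake
        weights = [w / 2 if p != truth else w for w, p in zip(weights, preds)]
    return mistakes
</syntaxhighlight>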
 
== Randomized weighted majority (RWMA) ==
A bound of <math>2.4(\log_2 n + m)</math> is not so good if the best expert makes a mistake 20% of the time. Can we do better? Yes: we would like to improve the dependence on <math>m</math>.
Instead of predicting based on a majority vote, we use the weights as probabilities; this is the '''randomized weighted majority''' algorithm.
If <math>w_i</math> is the weight of expert <math>i</math> and <math>W=\sum_i w_i</math>, then we follow expert <math>i</math> with probability <math>\frac{w_i}{W}</math>. Our goal is to bound the worst-case expected number of mistakes, assuming that the adversary (the world) has to select one of the answers as correct before we make our coin toss.
Why is this better in the worst case? Idea: the worst case for the deterministic algorithm ([[weighted majority algorithm]]) was when the weights split 50/50. But now it is not so bad, since we also have a 50/50 chance of getting it right. Also, to trade off between the dependence on <math>m</math> and on <math>\log_2 n</math>, we will generalize to multiplying by <math>\beta<1</math> instead of necessarily <math>\frac{1}{2}</math>.
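The randomized rule, again as a minimal Python sketch (same illustrative interface as above, with the update parameter <math>\beta</math> left general):

<syntaxhighlight lang="python">
import random

def randomized_weighted_majority(rounds, truths, beta=0.5):
    """Follow expert i with probability w_i / W each round (illustrative interface)."""
    n = len(rounds[0])
    weights = [1.0] * n
    mistakes = 0
    for preds, truth in zip(rounds, truths):
        # sample one expert in proportion to its weight and follow it
        i = random.choices(range(n), weights=weights)[0]
        if preds[i] != truth:
            mistakes += 1
        # multiply the weight of every mistaken expert by beta (< 1)
        weights = [w * beta if p != truth else w for w, p in zip(weights, preds)]
    return mistakes
</syntaxhighlight>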
 
== Analysis ==
At the <math>t</math>-th round, define <math>F_t</math> to be the fraction of weight on the '''wrong''' answers, so <math>F_t</math> is the probability that we make a mistake on the <math>t</math>-th round. Let <math>M</math> denote the total number of mistakes made so far. By linearity of expectation, <math>E[M]=\sum_t F_t</math>. On the <math>t</math>-th round, <math>W</math> becomes <math>W(1-(1-\beta)F_t)</math>.
Reason: on an <math>F_t</math> fraction of the weight, we are multiplying by <math>\beta</math>.
So, <math>W_{\text{final}}=n(1-(1-\beta)F_1)(1-(1-\beta)F_2)\cdots</math><br />
Let's say that <math>m</math> is the number of mistakes of the best expert so far. Since that expert's weight alone is <math>\beta^m</math>, we can use the inequality <math>W_{\text{final}}\geq \beta^m</math>. Now we solve. First, take the natural log of both sides. We get:
<math>m\ln\beta \leq \ln(n) + \sum_t \ln(1-(1-\beta)F_t)</math>. Simplify:<br />
<math>\ln(1-x)= -x -\frac{x^2}{2} - \frac{x^3}{3}-\cdots</math>, so<br />
<math>\ln(1-(1-\beta)F_t)< -(1-\beta)F_t</math>.<br />
<math>m\ln\beta \leq \ln(n) - (1-\beta)\sum_t F_t</math><br />
Now use <math>E[M] = \sum_t F_t</math> and rearrange: <math>(1-\beta)E[M] \leq \ln(n) + m\ln(1/\beta)</math>. Dividing by <math>1-\beta</math> gives:<br />
<math>E[M] \leq \frac{m\ln(1/\beta)+\ln(n)}{1-\beta}</math><br />
Let's see if we made any progress:
 
If <math>\beta=\frac{1}{2}</math>, we get <math>1.39m+2\ln(n)</math>;<br />
if <math>\beta=\frac{3}{4}</math>, we get <math>1.15m+4\ln(n)</math>.<br />
So we can see that we made progress. Roughly, the bound is of the form <math>(1+\epsilon)m+\epsilon^{-1}\ln(n)</math>.
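These coefficients fall straight out of the bound <math>E[M] \leq \frac{m\ln(1/\beta)+\ln(n)}{1-\beta}</math>; a quick Python check (the <math>\beta=0.9</math> row is an extra illustration, not from the text above):

<syntaxhighlight lang="python">
from math import log

# coefficient of m is ln(1/beta)/(1-beta); coefficient of ln(n) is 1/(1-beta)
for beta in (0.5, 0.75, 0.9):
    print(f"beta={beta}: {log(1/beta)/(1-beta):.2f}*m + {1/(1-beta):.0f}*ln(n)")
# beta=0.5: 1.39*m + 2*ln(n)
# beta=0.75: 1.15*m + 4*ln(n)
# beta=0.9: 1.05*m + 10*ln(n)
</syntaxhighlight>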
 
== Uses of randomized weighted majority (RWMA) ==
The algorithm can be used to combine multiple algorithms, doing nearly as well as the best one in hindsight.
 
One can also apply the '''randomized weighted majority algorithm''' in situations where the experts are making choices that cannot be combined (or cannot be combined easily), for instance repeated game playing or the online shortest path problem. In the online shortest path problem, each expert is telling you a different way to drive to work. You pick one route using the '''randomized weighted majority algorithm'''. Later you find out how well you would have done, and penalize appropriately. To do this right, we want to generalize from losses of 0 or 1 to losses in <math>[0,1]</math>, with the goal of having the expected loss be not too much worse than the loss of the best expert. We generalize by penalizing with <math>\beta^{loss}</math>, meaning that two examples of loss <math>\frac{1}{2}</math> give the same weight as one example of loss 1 and one example of loss 0 (the analysis still goes through).
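A sketch of that generalization, with an assumed interface in which each round reports a loss in <math>[0,1]</math> for every expert:

<syntaxhighlight lang="python">
import random

def rwm_with_losses(loss_rounds, beta=0.5):
    """loss_rounds[t][i] is expert i's loss in [0,1] on round t (illustrative)."""
    n = len(loss_rounds[0])
    weights = [1.0] * n
    total_loss = 0.0
    for losses in loss_rounds:
        # follow expert i with probability w_i / W and incur its loss
        i = random.choices(range(n), weights=weights)[0]
        total_loss += losses[i]
        # penalize by beta**loss: two rounds of loss 1/2 reweight an expert
        # exactly like one round of loss 1 plus one round of loss 0
        weights = [w * beta ** l for w, l in zip(weights, losses)]
    return total_loss
</syntaxhighlight>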
 
== Extensions ==
- "Bandit" problem <br />
- Efficient algorithm for some cases with many experts.<br />
- Sleeping experts/"specialists" setting.
 
== See also ==
* [[machine learning]]
* [[weighted majority algorithm]]
* [[game theory]]
 
== Further reading ==
* [http://www.csd.uwo.ca/~bma/CS873/weighted-majority.pdf Weighted Majority & Randomized Weighted Majority]
* [http://www.cs.cmu.edu/~avrim/ML04/lect0120.pdf Avrim Blum (2004). Machine Learning Theory]
* [http://www.cs.princeton.edu/courses/archive/spr06/cos511/scribe_notes/0330.pdf Rob Schapire (2006). Foundations of Machine Learning]
* [http://www.cs.cmu.edu/~avrim/ML98/lect0121 Predicting from Expert Advice]
* [http://www.wisdom.weizmann.ac.il/~naor/COURSE/AGT/agt_dec_24th_regret.ppt Uri Feige, Robi Krauthgamer, Moni Naor. Algorithmic Game Theory]
 
[[Category:Machine learning algorithms]]
