PDA

View Full Version : A Simple Decision Engine


Duxwing
25th-June-2013, 06:32 AM
V = p1(q1) + p2(q2) + p3(q3) ... + pn(qn)

Where V is the value of an action, qn is the value of the nth possible exclusive outcome of an action, and pn is the probability of qn occurring.

If the nth outcome is favorable, then qn is positive; if it is meaningless, then qn is zero; if it is unfavorable, then qn is negative.

Logic: Take all and only those actions whose V > 0

Note that q1 through qn comprise not only all individual, material outcomes, but all the combinations of those outcomes such that each combination excludes all others.
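For anyone who wants to play with the formula, here's a minimal Python sketch (the function name `action_value` is mine, not part of the model):

```python
# A rough sketch of the decision engine above: V is just the
# probability-weighted sum over mutually exclusive outcomes.
def action_value(outcomes):
    """outcomes: list of (p, q) pairs, where p is the probability of a
    mutually exclusive outcome and q is its signed value."""
    return sum(p * q for p, q in outcomes)

# Fair coin flip: win $1 on heads, lose $1 on tails -> V = 0.
print(action_value([(0.5, 1.0), (0.5, -1.0)]))  # 0.0
```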

Comparison with the standard risk-cost-benefit-analysis decision engine: my engine is a generalization of the standard engine, and its input is simpler and more intuitive, e.g.,

V = 90% Butter + 85% Milk + 95% Eggs - 1% Zombpocalypse

Rather than:

V = B + pb(B) - pl(r) - c and all the intervening math.

I've developed this model myself, but I'm sure that brighter, earlier minds than mine have already derived it and perhaps published huge, thick volumes about it. I just wanted a simple way to calculate whether an action is a good idea or not. Please don't eat me. :(

-Duxwing

EDIT: Comments and criticism are welcome, though!

walfin
25th-June-2013, 10:11 AM
Isn't this just expected value?

Anyway, how do you derive qn?

Chad
25th-June-2013, 12:07 PM
V = p1(q1) + p2(q2) + p3(q3) ... + pn(qn)

Where V is the value of an action, qn is the value of the nth possible exclusive outcome of an action, and pn is the probability of qn occurring.

If the nth outcome is favorable, then qn is positive; if it is meaningless, then qn is zero; if it is unfavorable, then qn is negative.

Logic: Take all and only those actions whose V > 0

Note that q1 through qn comprise not only all individual, material outcomes, but all the combinations of those outcomes such that each combination excludes all others.

Comparison with the standard risk-cost-benefit-analysis decision engine: my engine is a generalization of the standard engine, and its input is simpler and more intuitive, e.g.,

V = 90% Butter + 85% Milk + 95% Eggs - 1% Zombpocalypse

Rather than:

V = B + pb(B) - pl(r) - c and all the intervening math.

I've developed this model myself, but I'm sure that brighter, earlier minds than mine have already derived it and perhaps published huge, thick volumes about it. I just wanted a simple way to calculate whether an action is a good idea or not. Please don't eat me. :(

-Duxwing

EDIT: Comments and criticism are welcome, though!

The formula seems simple enough, but what is the difference between your formula and, say, a bar chart that presents the same information more visually?

Then again, I may be missing the point of the formula altogether.

Duxwing
25th-June-2013, 01:06 PM
Isn't this just expected value?

Anyway, how do you derive qn?

Expected value feeds into the logic, and qn is subjectively determined.

And Chad, my formula boils that entire graph down into a single number, which you can use either by yourself or with a computer.

-Duxwing

Cognisant
25th-June-2013, 04:04 PM
Your inputs are very abstract; what exactly are you testing for, and how are you testing for it?
Yes I'm turning this into an AI discussion.

http://m.youtube.com/watch?v=lg8fVtKyYxY

So you get the difference between choices and calculations. Traditional procedural AI uses algorithms to make calculated choices, and while this can be effective (for instance, pilots don't guesstimate how far their fuel will get them, at least you hope not), there's still the unfortunate problem that simulated reality is always going to be running behind real reality, and presumptive simulations are only as accurate as the presumptions they're based upon. Thus the challenge for anyone interested in developing practical AI is to design a mathematical model that doesn't just guess but makes educated guesses and constantly updates its assumptions based upon situational feedback.

So getting back to my opening questions: what is your decision engine testing for, and how are you testing for it? Because if you use the wrong information to make your decision, or the wrong method for determining what data is relevant to it, then all the calculation in the world won't change the fact that you're barking up the wrong tree.

TheHabitatDoctor
25th-June-2013, 04:15 PM
You should look into modeling Markov chains and networks.

The problem with the overarching idea lies within "value," hence my earlier (months ago) arguments that economics should be based on physics. Howard Odum ftw. This is where agent-based models come into play.

(The second, less problematic problem is the sporadic distribution and therefore sporadic valuation of resources, and the whole unknown factor regarding wtf is under our feet and above our heads).

scorpiomover
25th-June-2013, 04:38 PM
I've developed this model myself, but I'm sure that brighter, earlier minds than mine have already derived it and perhaps published huge, thick volumes about it. I just wanted a simple way to calculate whether an action is a good idea or not.
Yes, as walfin pointed out, this is the basic statistical expectation.

The Central Limit Theorem points out that the more events we deal with, the closer their average approaches the expectation. However, in any individual case, the results are largely random. This has some big consequences. Suppose we consider taking a journey by aeroplane. The chances of dying on a single flight can be as low as 1 in 20 million. Naively, this should mean that on any one flight, you'll never die. But if that were true, then no-one would die on any single flight, the chances would be zero out of 20 million, and the stats on plane crashes would read "zero casualties". Obviously that is not the case. So what this shows is that although the chance of any one person dying on any one flight is incredibly low, if we look at 1 trillion passengers on single flights, on average 50,000 of them would have died.
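That last figure is easy to verify in Python, using exact fractions so the result is exact:

```python
from fractions import Fraction

p_death = Fraction(1, 20_000_000)   # 1-in-20-million chance of dying per flight
passengers = 10**12                 # one trillion single-flight passengers
print(p_death * passengers)         # 50000
```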

You'd also need to consider the long-term effects of an action. There may be a 1% chance of a Zombpocalypse, but if it happens, then your chances of dying increase massively. If you die, then the Monty Hall problem comes into play: once the "death event" happens, it changes all your potential futures automatically. So a "value" has to be weighted not just by what it's worth in the present, but also by all the consequential changes to ALL the values from then on, and by how easily it can be changed back and the probability of its being reversed. Plus, you're still gambling.

Feelers tend to be much better at making a lot of real-life decisions, such as in dating and in other situations where certain types of interaction with others yield a much more positive result, because feelings work with a decision-making algorithm similar to the one you suggested, but with a continually optimising risk analysis, which gives a much more optimised result.

Please don't eat me. :(
I have no desire to "eat you". I prefer not to eat "human".

EDIT: Comments and criticism are welcome, though!
You are thinking. Keep it up.

Duxwing
25th-June-2013, 08:34 PM
Your inputs are very abstract; what exactly are you testing for, and how are you testing for it?

My inputs are bounded random variables with known probabilities (think of many-sided dice), and my values are either subjectively determined or already quantifiable (think of money). For example, you might say that one ice cream cone is worth ten grapes.


Yes I'm turning this into an AI discussion.

http://m.youtube.com/watch?v=lg8fVtKyYxY

So you get the difference between choices and calculations. Traditional procedural AI uses algorithms to make calculated choices, and while this can be effective (for instance, pilots don't guesstimate how far their fuel will get them, at least you hope not), there's still the unfortunate problem that simulated reality is always going to be running behind real reality, and presumptive simulations are only as accurate as the presumptions they're based upon. Thus the challenge for anyone interested in developing practical AI is to design a mathematical model that doesn't just guess but makes educated guesses and constantly updates its assumptions based upon situational feedback.


Or, in other words, to develop an artificial intuition.


So getting back to my opening questions: what is your decision engine testing for, and how are you testing for it? Because if you use the wrong information to make your decision, or the wrong method for determining what data is relevant to it, then all the calculation in the world won't change the fact that you're barking up the wrong tree.

I'm not testing for anything. How is "testing" relevant?

-Duxwing

BigApplePi
25th-June-2013, 10:40 PM
V = p1(q1) + p2(q2) + p3(q3) ... + pn(qn)

Where V is the value of an action, qn is the value of the nth possible exclusive outcome of an action, and pn is the probability of qn occurring.

If the nth outcome is favorable, then qn is positive; if it is meaningless, then qn is zero; if it is unfavorable, then qn is negative.

Logic: Take all and only those actions whose V > 0

Note that q1 through qn comprise not only all individual, material outcomes, but all the combinations of those outcomes such that each combination excludes all others.

Comparison with the standard risk-cost-benefit-analysis decision engine: my engine is a generalization of the standard engine, and its input is simpler and more intuitive, e.g.,

V = 90% Butter + 85% Milk + 95% Eggs - 1% Zombpocalypse

Rather than:

V = B + pb(B) - pl(r) - c and all the intervening math.

I've developed this model myself, but I'm sure that brighter, earlier minds than mine have already derived it and perhaps published huge, thick volumes about it. I just wanted a simple way to calculate whether an action is a good idea or not. Please don't eat me. :(

-Duxwing

EDIT: Comments and criticism are welcome, though!
I have an intuition for a critique of this which will bring something into doubt (or maybe I just don't get the formula). To get my intuition to come forth, can you give a typical, easy-to-understand example, say with p1(q1) + p2(q2) + p3(q3), so I know what you typically have in mind? I'm a little reluctant to come up with my own example for fear it won't fit what you think is typical.

Brontosaurie
25th-June-2013, 10:47 PM
do what brings more good than bad? sure.

Cognisant
25th-June-2013, 10:54 PM
I'm not testing for anything. How is "testing" relevant?
Err, sorry, I mean: what kind of decision is your decision engine specifically made for?

If you were doing AI, then you would be testing for relevance to a specific goal, so that with a feedback loop the program trains itself to produce the appropriate output to get the input you want, like how the inputs of pain and pleasure guide us to forming self-serving behaviours. But that's not what you're doing.

Duxwing
25th-June-2013, 11:21 PM
Err, sorry, I mean: what kind of decision is your decision engine specifically made for?

To choose whether to act or not to act from a list of given actions and outcomes.


If you were doing AI, then you would be testing for relevance to a specific goal, so that with a feedback loop the program trains itself to produce the appropriate output to get the input you want, like how the inputs of pain and pleasure guide us to forming self-serving behaviours. But that's not what you're doing.

Yep.

An example of my system in action would be a bet: should you bet $1 for a one-in-a-million chance of winning one million dollars? Most people would say yes, a few would say no, but the real answer is that the bet won't change anything:

V = p1q1 - p2q2
p1 = 1/1,000,000
q1 = 1,000,000
p2 = 1
q2 = 1

Therefore V = 1/1,000,000 * 1,000,000 - 1*1
= 1 - 1
= 0
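The same calculation in Python, using exact fractions so the zero comes out exact:

```python
from fractions import Fraction

p_win = Fraction(1, 1_000_000)   # one-in-a-million chance of winning
prize = 1_000_000                # the $1,000,000 prize
stake = 1                        # the $1 stake, lost with certainty

V = p_win * prize - 1 * stake
print(V)  # 0
```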

But the cool part about this system is that you can decide whether a bet with n possible mutually exclusive outcomes is profitable or not.

-Duxwing

BigApplePi
25th-June-2013, 11:27 PM
An example of my system in action would be a bet: should you bet $1 for a one-in-a-million chance of winning one million dollars? Most people would say yes, a few would say no, but the real answer is that the bet won't change anything:

V = p1q1 - p2q2
p1 = 1/1,000,000
q1 = 1,000,000
p2 = 1
q2 = 1
That is good and accurate. But your model went to p3/q3, p4/q4. I need a more extended example to represent practical choices in life.

Duxwing
26th-June-2013, 12:21 AM
That is good and accurate. But your model went to p3/q3, p4/q4. I need a more extended example to represent practical choices in life.

Perhaps you've misread: my model goes to n terms, not four terms.

-Duxwing

BigApplePi
26th-June-2013, 02:54 AM
V = p1(q1) + p2(q2) + p3(q3) ... + pn(qn)

Where V is the value of an action, qn is the value of the nth possible exclusive outcome of an action, and pn is the probability of qn occurring.
Okay. Looks like I'm going to have to come up with an example. Don't know.

Let's say a guy is looking for a girl; the object is romance. He is starting from scratch. V = successful romance, value high.

q1 = look presentable, p1 = close to 1;
q2 = hang out with contacts, p2 = .5;
q3 = make the party scene, p3 = .4;
q4 = get sick, p4 = .1;
q5 = be pessimistic, p5 = x;
q6 = be optimistic or neutral, p6 = (1-x);
q7 = meet girl and say hello, p7 = not independent;
q8 = if not rebuffed, ask for date, p8 = y.

Am I doing this right? I don't think so, as these are dependent probabilities.

Duxwing
26th-June-2013, 03:03 AM
Okay. Looks like I'm going to have to come up with an example. Don't know.

Let's say a guy is looking for a girl; the object is romance. He is starting from scratch. V = successful romance, value high.

q1 = look presentable, p1 = close to 1;
q2 = hang out with contacts, p2 = .5;
q3 = make the party scene, p3 = .4;
q4 = get sick, p4 = .1;
q5 = be pessimistic, p5 = x;
q6 = be optimistic or neutral, p6 = (1-x);
q7 = meet girl and say hello, p7 = not independent;
q8 = if not rebuffed, ask for date, p8 = y.

Am I doing this right? I don't think so, as these are dependent probabilities.

You're close. Think of it this way:

I have a coin. If I get heads, then I get a dollar, and if I get tails, then I lose a dollar. The value of flipping the coin is therefore .5(1) - .5(1), or zero. Now imagine a six-sided die: if I roll a one, I lose three dollars; a two, I lose two dollars; a three, I lose one dollar; a four, I get one dollar; a five, I get two dollars; a six, I get three dollars. The value of rolling the die is therefore 1/6 * -3 + 1/6 * -2 + 1/6 * -1 + 1/6 * 1 + 1/6 * 2 + 1/6 * 3, or zero again.
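Both toy bets check out in a couple of lines of Python (exact fractions, so the zeros are exact):

```python
from fractions import Fraction

# Coin: heads +$1, tails -$1, each with probability 1/2.
coin = Fraction(1, 2) * 1 + Fraction(1, 2) * -1

# Die: payoffs -3..+3 (skipping 0), each with probability 1/6.
die = sum(Fraction(1, 6) * q for q in (-3, -2, -1, 1, 2, 3))

print(coin, die)  # 0 0
```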

See the idea?

-Duxwing

BigApplePi
26th-June-2013, 04:12 PM
I have a coin. If I get heads, then I get a dollar, and if I get tails, then I lose a dollar. The value of flipping the coin is therefore .5(1) - .5(1), or zero. Now imagine a six-sided die: if I roll a one, I lose three dollars; a two, I lose two dollars; a three, I lose one dollar; a four, I get one dollar; a five, I get two dollars; a six, I get three dollars. The value of rolling the die is therefore 1/6 * -3 + 1/6 * -2 + 1/6 * -1 + 1/6 * 1 + 1/6 * 2 + 1/6 * 3, or zero again.
This is a good example of how to operate on the ground floor. Notice that all the possibilities are covered, and that they are mutually exclusive: if one happens, the others can't. What happens if we introduce time, or steps/stages where the second happening depends on the outcome of the first? Then things rapidly get complicated, but this is real life and why we can't predict with certainty.

Nevertheless, we make plans anyway. Next: how can we create "A Complex Decision Engine"?
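One hypothetical way to extend the engine to such stage-dependent cases is the standard decision-tree rollback: let an outcome's value itself be another lottery, and fold expected values back up the tree. A rough Python sketch, with made-up two-stage numbers:

```python
from fractions import Fraction as F

def value(node):
    """A leaf is a terminal payoff; a branch is a list of
    (probability, subtree) pairs. Fold expected values back up."""
    if not isinstance(node, list):
        return node
    return sum(p * value(child) for p, child in node)

# Stage 1: 50% chance of getting a date at all.
# Stage 2 (only if stage 1 succeeds): 70% it goes well (+10), 30% it doesn't (-2).
tree = [(F(1, 2), [(F(7, 10), 10), (F(3, 10), -2)]),
        (F(1, 2), 0)]
print(float(value(tree)))  # 3.2
```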

Duxwing
26th-June-2013, 05:50 PM
This is a good example of how to operate on the ground floor. Notice that all the possibilities are covered, and that they are mutually exclusive: if one happens, the others can't. What happens if we introduce time, or steps/stages where the second happening depends on the outcome of the first? Then things rapidly get complicated, but this is real life and why we can't predict with certainty.

Nevertheless, we make plans anyway. Next: how can we create "A Complex Decision Engine"?

I was well aware that my model isn't nearly good enough to handle Chaos Theory.

-Duxwing