Bayesian Belief and Decision Networks
By now you should be familiar with graph creation and querying with belief networks. Creating a decision problem is not much more difficult. First, go to the 'Options' menu and select 'Show Decision Net Controls'. You should see the options change slightly, and 'Solve' mode will have two new buttons pertaining to decision nodes.
Try creating a node somewhere on the graph. The Node Properties dialog will have an extra dropdown menu to allow you to select the node type. By default, nodes are created as regular nodes. Notice that when you select 'Value' as the type, you can no longer enter any domain information for the node. This is because value nodes only have a single domain element. You can only create one value node in the graph.
Also notice that you can change the utility table for a value node just as you can change the probability table for a regular node. Decision nodes have no probability tables, but you will have the chance to modify the decision functions later on.
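The three node types the dialog distinguishes can be pictured as follows. This is only an illustrative sketch (the class and field names are invented, not the applet's internals): chance ("regular") nodes carry a domain and a probability table, decision nodes carry a domain but no probability table, and the single value node carries only a utility table.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str                                  # "chance", "decision", or "value"
    domain: tuple = ()                         # empty for the value node
    table: dict = field(default_factory=dict)  # probability table or utility table

# Illustrative nodes loosely based on the fire alarm problem:
alarm = Node("Alarm", "chance", ("true", "false"))   # will get a probability table
call = Node("Call", "decision", ("yes", "no"))       # no probabilities; a decision function instead
value = Node("Utility", "value", table={("true", "yes"): -200})  # utility table only
```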
Create a decision network of your own if you like, or load an existing decision problem to query. If you load a decision problem, decision net controls will automatically be enabled.
The first step is to ensure that all decision nodes relevant to the value node are ordered: there should be a path from one of them to the value node that goes through each of them exactly once. Once this is done, click 'Add no-forgetting arcs'. This has already been done in the pre-existing problems, but you can go back to 'Create' mode, delete some edges, and see what happens.
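The idea behind no-forgetting arcs is that the agent remembers its earlier decisions and everything it had observed when making them. A minimal sketch of adding them, assuming the decisions are already totally ordered (node names here are illustrative, echoing the fire alarm problem):

```python
def add_no_forgetting_arcs(ordered_decisions, parents):
    """Give each decision, as extra parents, the previous decision
    and all of that decision's parents, so nothing is "forgotten".

    ordered_decisions: decision names in their total order.
    parents: dict mapping node name -> set of parent names (modified in place).
    """
    for prev, cur in zip(ordered_decisions, ordered_decisions[1:]):
        parents[cur] |= parents[prev] | {prev}
    return parents

# Illustrative use: 'Call' comes after 'Check_smoke', so it inherits
# 'Check_smoke' and its parent 'Report' as no-forgetting arcs.
parents = {"Check_smoke": {"Report"}, "Call": {"See_smoke"}}
add_no_forgetting_arcs(["Check_smoke", "Call"], parents)
```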
Below is the Decision Fire Alarm Problem: in the first picture, it is missing an arc; the second picture shows the applet after the no-forgetting arcs have been added. You can only optimize the decisions once all no-forgetting arcs are present.
You can now optimize the decisions, creating the optimal policy. You will be shown the expected value for that policy. If you optimize the decisions when in Verbose Query Mode, you will be able to see the variable elimination that gives you the information needed to maximize the decisions.
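To make "optimal policy" and "expected value" concrete, here is a brute-force sketch on a tiny single-decision problem (the network and all numbers are invented for illustration; the applet itself uses variable elimination rather than enumeration). A policy assigns a decision to each value of the decision node's parent, and the optimal policy is the one with the highest expected utility:

```python
from itertools import product

# Chance node Weather, observed through Forecast; decision Umbrella
# has Forecast as its parent; utility depends on Weather and Umbrella.
P_weather = {"rain": 0.3, "sun": 0.7}
P_forecast = {                      # P(Forecast | Weather)
    "rain": {"rainy": 0.8, "sunny": 0.2},
    "sun":  {"rainy": 0.1, "sunny": 0.9},
}
utility = {("rain", "take"): 70, ("rain", "leave"): 0,
           ("sun", "take"): 20, ("sun", "leave"): 100}

def expected_utility(policy):
    """policy: dict mapping each forecast value to a decision."""
    return sum(P_weather[w] * P_forecast[w][f] * utility[(w, policy[f])]
               for w in P_weather for f in ("rainy", "sunny"))

# Enumerate every decision function and keep the best one.
best = max((dict(zip(("rainy", "sunny"), acts))
            for acts in product(("take", "leave"), repeat=2)),
           key=expected_utility)
```

With these numbers the optimal policy takes the umbrella exactly when the forecast is rainy.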
This policy is optimal, based on the observations currently in the network, but if you would like to see what effect changing the policy would have, click 'View/Modify Decision' and then click on a decision node. You will see a window like the one below, which was produced by inspecting the decision function for the variable 'Call'.
Change as many of the values as you like, click 'Ok', and then query the value node. You might see a different expected utility; if it has changed, it will be lower than the expected utility you saw before. You can also now query the decision nodes to see how likely it is that the agent will make a particular decision. Notice that this probability is based on the decision function and the probabilities of the decision node's parents.
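How a decision node's "probability" arises can be sketched in a few lines (the numbers and names are illustrative): the decision function deterministically maps each parent context to an action, so the probability of an action is just the total probability of the contexts that select it.

```python
# Marginal distribution over the decision's parent (illustrative numbers),
# and the current decision function from the View/Modify Decision window.
P_forecast = {"rainy": 0.31, "sunny": 0.69}
policy = {"rainy": "take", "sunny": "leave"}

# P(decision = d) = sum of P(parent context) over contexts mapped to d.
P_decision = {}
for f, p in P_forecast.items():
    d = policy[f]
    P_decision[d] = P_decision.get(d, 0.0) + p
```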
Finally, a few restrictions apply to decision networks. First, there may be only one value node. Second, the utility of that node, and the probabilities of any nodes that have decision nodes as parents, are undefined until the decision functions for those parents are defined. A final restriction is that a decision may not be optimized if one of its parents has an observed value: a decision function must define the agent's action for every context, so if some decisions are observed or have observed parents, you will not be able to optimize them.