Assignment #2: Knowledge Representation & Logic


Assignment #2 consists of two parts: the first part is about logic and logical proofs, while the second part deals with knowledge representation. The assignment is worth a total of 100 points.


Due date: Tue, April 12, 2011


Submitting Your Solution

Important:  Your submission for part 2a must include a jar file containing all the classes (except the ones in the provided framework) and the source code detailing your approach. It is important that your source code is documented (documentation will be part of the grading). Make sure that your jar works prior to submission; an incomplete solution is invalid and will not be graded. Your submission also has to include a README file that describes your solution, any problems you encountered, and any other details that your TA should know.

All material has to be submitted by e-mail to Lorenz Fischer by 2 PM on the due date, or handed in in the lecture hall before the lecture starts. The subject of the e-mail has to be PAI2011_A2.


  1. Click on the menu item "Build -> Build 'myagent.jar' artifact"
    (This will create a file named myagent.jar in the root folder of your project, i.e. the folder into which you unzipped the setup file.)
  2. Make sure that the created jar file contains your source and the compiled class files.
  3. Rename the file to: {lastname}_{firstname}_pai2011_a2.jar
    (e.g. fischer_lorenz_pai2011_a2.jar), all characters lowercase
  4. Email it to Lorenz Fischer

Written Text

  • If you have your solutions on paper, bring that paper to class.
  • If you have your solutions in PDF, send them to Lorenz Fischer along with the jar file. Please send all parts of your solution in one single email message. Don't put any solutions into the text of your message.

Part 1: Logic (30)

Part 1a (15)

Given the following clause forms, prove whether there exists an x ∈ {John, Mike, Tom} that satisfies both Climber(x) and ¬Skier(x). If such an x exists, who is it?

  1.  Alp(Tom)
  2.  Alp(Mike)
  3.  Alp(John)
  4.  ¬Alp(x) ∨ Skier(x) ∨ Climber(x)
  5.  ¬Skier(x) ∨ Like(x, Snow)
  6.  ¬Climber(x) ∨ ¬Like(x, Rain)
  7.  ¬Like(Tom, y) ∨ ¬Like(Mike, y)
  8.  Like(Tom, y) ∨ Like(Mike, y)
  9.  Like(Tom, Rain)
  10.  Like(Tom, Snow)

What to do:

  1. Write down your proof step by step and say in each step which clause you've applied.
  2. Hand in the paper (or send an email) that contains your solution.

Part 1b (15)

The unicorn is a mammal if it is horned. If the unicorn is either immortal or a mammal, then it is horned. If the unicorn is mythical, then it is immortal, but if it is not mythical, then it is a mortal mammal. 

What to do:

  1. Encode these statements in propositional logic.
  2. Use resolution to prove that the unicorn is a mammal.
  3. Hand in the paper (or send an email) that contains your solution of 1. and 2.



Part 2: Knowledge Representation (70)


In Assignment 1 you implemented a Mario agent using a search strategy. As you have probably seen, the winning agent of the 2009 Mario Competition, the A*-Agent, achieves pretty stunning performance by simulating the physics engine of the game in each step and choosing the optimal way for Mario to run. While this solution is certainly elegant and yields very good performance, it might not be feasible to implement for certain problems. Another way of programming "intelligence" into a system is through rule systems. In this assignment you are going to implement a new agent using a rule engine and compare Mario's behaviour under various sets of rules.

Hint: Before starting work on the rule-based agent, set up your project in a way that allows you to run both your solution from Assignment 1 and your new rule-based agent. If you are not too experienced with the IntelliJ IDE and do not know how to set things up in one workspace, you can always create a copy of the whole project folder.

Part 2a (40)

This is how you should go about implementing this:
Build a knowledge base containing facts about the current state of Mario's environment. Then add rules to the knowledge base which, based on the current environment observations, infer the state of the action keys that define Mario's movements. We provide you with two sets of rules, which we collected by recording the gameplay of the A*-Mario-Agent and generalizing its movement decisions. The two sets take differing numbers of attributes into account (see below).

In each call of the getAction() method you will have to feed your knowledge base with the current state of the observation variable. After that, you query the engine for the various buttons available to control Mario and use this information to build the return value of getAction().
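The fact-feeding step above can be sketched as follows. This is a minimal illustration, not part of the provided framework: the class and method names (ObservationFacts, factsFor), the grid layout, and in particular whether the CELL_x_y names in the rule files use absolute grid indices or Mario-relative ones are assumptions you must verify against the provided rule files and framework code.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: turn a 2-D observation grid (as returned by something like
// Environment.getCompleteObservation()) into the CELL_x_y attribute/value
// pairs that the rule files refer to, for a square of the given radius
// around the grid centre (where Mario is assumed to sit).
public class ObservationFacts {

    public static Map<String, Integer> factsFor(byte[][] scene, int radius) {
        int cx = scene.length / 2;
        int cy = scene[0].length / 2;
        Map<String, Integer> facts = new LinkedHashMap<String, Integer>();
        for (int x = cx - radius; x <= cx + radius; x++) {
            for (int y = cy - radius; y <= cy + radius; y++) {
                // One fact per cell, named like the attributes in the rule files.
                facts.put("CELL_" + x + "_" + y, (int) scene[x][y]);
            }
        }
        return facts;
    }
}
```

Each returned entry would then be asserted into the knowledge base at the start of every getAction() call, before querying the engine for the key values.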

The two rule files you should use for your agent have the same structure but have been generated using differing amounts of data. As said earlier, these rules have been derived by recording the returned actions of the A*-Agent for the given environment observations. In one case we took all observations around Mario in a square of 13 by 13 fields into account, while in the other case we only recorded the observations in a 5 by 5 square. Then we used the data-mining software Weka to generalize rules from these observations. You have to encode these rules in your agent:

  1. rules_astar_13.txt
  2. rules_astar_5.txt

These files each contain 5 sections, one for each key. See below for an example:

JRIP rules for KEY_SPEED:

(CELL_12_10 = 0) and (CELL_11_13 = 20) and (CELL_10_13 = 20) => KEY_SPEED=0 (98.0/43.0)

(mayMarioJump = 0) and (CELL_12_13 = -10) and (CELL_12_9 = 0) and (CELL_11_13 = -10) => KEY_SPEED=0 (57.0/27.0)

 => KEY_SPEED=1 (12534.0/1397.0)

Number of Rules : 3

After the title, one or more lines with rules follow. Each rule has the form LHS => RHS. Please note that the LHS (left-hand side) can be empty, which means that no conditions are needed and the rule always fires. You can use such a rule as the default for the given action key.
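One possible way to encode the first KEY_SPEED rule above in Jena's rule syntax is sketched below. The namespace prefix, the property names, and the ?m resource are our assumptions for illustration; use whatever vocabulary you assert your facts under.

```
# Assumed namespace; match it to the one your facts use.
@prefix mario: <urn:mario#>.

# (CELL_12_10 = 0) and (CELL_11_13 = 20) and (CELL_10_13 = 20) => KEY_SPEED=0,
# translated one condition per triple pattern:
[speed1: (?m mario:CELL_12_10 "0"), (?m mario:CELL_11_13 "20"),
         (?m mario:CELL_10_13 "20") -> (?m mario:KEY_SPEED "0")]
```

One caveat worth thinking about: a JRIP rule list is ordered (the first matching rule wins, and the empty-LHS rule is the fallback), whereas a forward-chaining engine fires every rule whose conditions hold. Your agent therefore needs a strategy for resolving conflicting conclusions, for example preferring the conclusion of a more specific rule over the default when querying.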

The rule files contain the following attributes:

  • CELL_x_y (e.g. CELL_12_10), values -128 to 128: the observations of the array retrieved by calling Environment.getCompleteObservation().
  • 0=false, 1=true: the value retrieved by calling Environment.isMarioOnGround().
  • mayMarioJump, 0=false, 1=true: the value retrieved by calling Environment.mayMarioJump().
  • 0=false, 1=true: the value retrieved by calling Environment.isMarioCarrying().
  • 0, 1, or 2: the value retrieved by calling Environment.getMarioMode().
  • 0=false, 1=true: the value in the return value array at position Mario.KEY_LEFT.
  • 0=false, 1=true: the value in the return value array at position Mario.KEY_RIGHT.
  • 0=false, 1=true: the value in the return value array at position Mario.KEY_DOWN.
  • 0=false, 1=true: the value in the return value array at position Mario.KEY_JUMP.
  • 0=false, 1=true: the value in the return value array at position Mario.KEY_SPEED.

The reasoner we use is part of the "Jena Semantic Web Framework". Download Jena and add the following jar files to your project dependencies: jena-2.6.4.jar, log4j-1.2.13.jar, slf4j-api-1.5.8.jar, slf4j-log4j12-1.5.8.jar, xercesImpl-2.7.1.jar (note: use the versions recommended for that Jena release; if there is a typo, please let us know).

We made a simple example that shows how to set up a knowledge base in Jena, fill it with rules and facts, and infer new facts.
More information on the reasoner can be found in the Jena documentation.

What might also be of interest to you are the following two classes:

  • A simple frame containing a table and a text field, which shows how Mario perceives his surroundings.
  • If you want to use the debug frame, you need to replace your version of the corresponding class with the version we provide.

What to do:

Implement a Mario-Agent using a knowledge base and a set of rules. 

Please note: As with assignment 1, your work will not be graded according to the score your agent achieves in the simulation, but according to how you implemented and documented your solution.

Part 2b (30)

In this part you need to add your own rules. How do you do that? First you need to create "features": for example, "a gap" or "a turtle", taking into account their relative position on screen (e.g., "top left", "left", etc.). The rules have to be written using the same syntax as in part 2a. You will have to write methods that detect your features, and rules that "tell" Mario what to do when a feature is detected.
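A feature detector of this kind can be as simple as scanning the observation grid. The sketch below detects "a gap ahead"; the class and method names, the axis convention (scene[x][y] with x as the column), and the assumption that 0 means an empty cell are all ours, not the framework's, so adapt them to the actual observation encoding.

```java
// Sketch of a Part 2b feature detector: a column ahead of Mario with no
// solid cell anywhere in it is treated as a gap.
public class FeatureDetector {

    // scene[x][y]: 0 = empty; any other value counts as ground here.
    public static boolean gapAhead(byte[][] scene, int marioX, int lookAhead) {
        for (int x = marioX + 1; x <= marioX + lookAhead && x < scene.length; x++) {
            boolean ground = false;
            for (int y = 0; y < scene[x].length; y++) {
                if (scene[x][y] != 0) { ground = true; break; }
            }
            if (!ground) {
                return true;  // found a column with no solid cell: a gap
            }
        }
        return false;
    }
}
```

The boolean result would then be asserted into the knowledge base as a fact (e.g. gapAhead = 1), so that your hand-written rules can refer to it on their left-hand side.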

What to do:

  1. Define the features and the positions considered (left, right, top, center, etc.).
  2. Create Java code that detects the features.
  3. Create your own rules based on the features you defined.
  4. Compare the performance of your Mario agents across the previous implementations (the one from assignment 1, the two from part 2a, and this one) and write one paragraph with your explanation.
  5. When you export your jar file, make sure that the "active" configuration of your rule-based Mario is the one with the rules you created in 2b.




  1. Format of submission: to avoid your assignment not being graded, use the e-mail subject PAI2011_A2.
  2. Package your submission into a zip file named {lastname}_{firstname}_pai2011_a2.(zip/tgz/bz2).
    1. Contents of the zipped submission:
      1. JAR file: {lastname}_{firstname}_pai2011_a2.jar
      2. README file
      3. PDF/DOC file: {lastname}_{firstname}_a2.(pdf/doc)
  3. Strictly follow the package names and class name of your agent:
    1. package: my_agent
    2. class: MyAgent
  4. The assignments are intended as individual work; however, if you collaborate with your colleagues, you must clearly specify your individual contribution in the README file, or you risk not getting credit for that section.