
Improving Game Design through Responsive Configuration and Procedural Generation

by Stephen P. Landers




Institution: Ohio University
Department: Computer Science (Engineering and Technology)
Degree: MS
Year: 2014
Keywords: Computer Science; Educational Software; Technology; game design; procedural generation; problem space searching
Record ID: 2025617
Full text PDF: http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1407252394


Abstract

One difficulty in game design is that the criteria determining whether a given game feature (e.g., the ability to jump, the theme, or the inclusion of certain types of weapons) would improve enjoyment of a game are often unclear and can differ greatly between players. This thesis investigates a method by which game configuration and creation might be automated such that a numerical rating can be assigned to any given game feature, allowing the enjoyability of a feature to be gauged more objectively. This automated game design method involves:

1) Creation of a series of candidate games.
2) A player playing the games.
3) The player rating the games.
4) A "best" and a "worst" game being constructed for each player based on this feedback.

A tool using this method is used in an experiment to determine whether game features can successfully be rated in this way: subjects use the tool, and the games it produces are checked to see whether they are rated, on average, significantly higher than the candidate games. By allowing the aspects of a game to vary and by rating user enjoyment of games created from those aspects, it can be determined numerically what sorts of games will most appeal to particular users or groups of users, and better games can be created. By including in this process a tool that can automatically generate a game from these aspects, the process can be automated and expedited, which would be beneficial for game prototyping or for making games whose configuration automatically responds to the user. The results supported the usefulness of the proposed method: the average rating difference between the "worst" and "best" games was 23%, with the "worst" game rated on average worse than randomly selected games and the "best" game rated on average better than randomly selected games.
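
To make the rating-and-composition step concrete, the following Python sketch illustrates one plausible reading of the method: candidate games are generated by varying feature values, each game receives a player rating, per-feature-value scores are computed by averaging the ratings of the games that used each value, and a "best" and "worst" game are composed from the highest- and lowest-scoring values. All names here (the feature set, score_feature_values, compose_game) are illustrative assumptions for exposition, not the thesis's actual tool or feature list.

    import random
    from collections import defaultdict
    from statistics import mean

    # Hypothetical feature space: each feature has a set of possible values.
    FEATURES = {
        "can_jump": [True, False],
        "theme": ["sci-fi", "fantasy", "western"],
        "weapon_set": ["melee", "ranged", "mixed"],
    }

    def generate_candidate_games(n):
        """Create n candidate games by randomly varying each feature."""
        return [{f: random.choice(vals) for f, vals in FEATURES.items()}
                for _ in range(n)]

    def score_feature_values(games, ratings):
        """Average the player's ratings over the games that used each feature value."""
        buckets = defaultdict(list)
        for game, rating in zip(games, ratings):
            for feature, value in game.items():
                buckets[(feature, value)].append(rating)
        return {key: mean(vals) for key, vals in buckets.items()}

    def compose_game(scores, pick_best=True):
        """Build a 'best' (or 'worst') game from the highest- (or lowest-) scored values."""
        chooser = max if pick_best else min
        game = {}
        for feature, values in FEATURES.items():
            observed = [v for v in values if (feature, v) in scores]
            game[feature] = chooser(observed, key=lambda v: scores[(feature, v)])
        return game

    # Example run; the ratings here are random placeholders standing in for a player's feedback.
    candidates = generate_candidate_games(10)
    ratings = [random.uniform(0, 10) for _ in candidates]
    scores = score_feature_values(candidates, ratings)
    best_game = compose_game(scores, pick_best=True)
    worst_game = compose_game(scores, pick_best=False)
    print(best_game, worst_game)

In an experiment along the lines the abstract describes, the composed best and worst games would then be rated by the same player and compared against the randomly generated candidates to test whether the per-feature scores capture something real about that player's preferences.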