GOAL:
This was a school group project with the objective of showcasing a Python package. My team decided to use TensorFlow's machine learning capabilities to create a program that would play the game Rock Paper Scissors with a user. This involved training a convolutional neural network to detect the user's moves with computer vision, building a GUI for the game, and using reinforcement learning to allow the program to adapt to simple patterns in the user's play and counter them.
Personal Contributions:
I was tasked with handling most of the machine learning backend involved in the project. This included data collection, training of the computer vision model, and assisting with the reinforcement learning.
Collecting Data
As there were no online datasets suitable for our needs, I developed a short script to collect and process large amounts of data from a laptop's webcam. The primary package for this was OpenCV. The script displayed a video feed similar to our game environment, with a box in the upper left in which the user could play their move. When a button was pressed, training images were rapidly captured from the box. During this time, we made sure to vary hand position and orientation slightly, and ran the script with numerous people, to ensure a diversified set of training and testing data.
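The sketch below illustrates this kind of OpenCV capture loop; the output directory, box coordinates, and key bindings are hypothetical stand-ins for whatever the real script used.

```python
import os
import cv2

# Hypothetical output path and capture box; the real script's values differed.
SAVE_DIR = "data/raw/snake"
BOX = (50, 50, 300, 300)  # x, y, width, height of the upper-left play area

os.makedirs(SAVE_DIR, exist_ok=True)
cap = cv2.VideoCapture(0)          # laptop webcam
capturing = False
count = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    x, y, w, h = BOX
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)  # show the play box
    cv2.imshow("collect", frame)

    key = cv2.waitKey(1) & 0xFF
    if key == ord("c"):            # toggle rapid image capture
        capturing = not capturing
    elif key == ord("q"):          # quit
        break

    if capturing:
        roi = frame[y:y + h, x:x + w]
        cv2.imwrite(os.path.join(SAVE_DIR, f"img_{count:05d}.png"), roi)
        count += 1

cap.release()
cv2.destroyAllWindows()
```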
After training some initial models, I realized we needed to adjust our data collection in order to reach our desired accuracy. We switched from Rock Paper Scissors to a new version of the game with hand signals we made up, called Bird Cow Snake. The symbols for this game are shown below. We chose these over the traditional Rock Paper Scissors symbols because they are more visually distinct from one another, which made it easier for the model to differentiate which symbol was being played.
I also used OpenCV to apply Sobel edge detection to the images, removing effects from the varying backgrounds in the webcam's view. In addition, the images were resized to fit our model's input constraints. The same preprocessing was applied in real time during the game as well.
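A minimal sketch of this preprocessing step is below; the kernel size and the 224x224 target resolution are assumptions (224x224 is MobileNetV2's default input size).

```python
import cv2
import numpy as np

def preprocess(roi, size=(224, 224)):
    """Apply Sobel edge detection and resize a captured frame.

    The kernel size and target resolution here are assumptions.
    """
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    # Sobel gradients in x and y, combined into a single edge-magnitude image
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    edges = cv2.magnitude(gx, gy)
    edges = cv2.normalize(edges, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    resized = cv2.resize(edges, size)
    # Stack to three channels so the result matches a model expecting RGB input
    return cv2.merge([resized, resized, resized])
```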
Machine Learning Training
I was also responsible for the supervised learning used to create our computer vision model. I primarily used the TensorFlow, Keras, and OpenCV packages inside a Google Colab notebook. The collected images were labeled with integer classes and stored as NumPy arrays to allow for easier handling during training and testing. To make the most of our time, I used transfer learning with the Keras "MobileNetV2" model, which had already been pre-trained on a large image dataset. This is a convolutional neural network; I removed its classification layer and froze its base layers. This meant we could reuse its general feature-extraction layers and train only a new classification layer on our own data, vastly increasing the accuracy of our model with a limited amount of training.
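A minimal sketch of this transfer-learning setup is below. The frozen MobileNetV2 base and the Adam optimizer follow the approach described here; the pooling/dropout head, the loss function, and the variable names are assumptions added for illustration.

```python
import tensorflow as tf
from tensorflow import keras

NUM_CLASSES = 3                 # bird, cow, snake
IMG_SHAPE = (224, 224, 3)

# Load MobileNetV2 pre-trained on ImageNet, without its classification head
base = keras.applications.MobileNetV2(
    input_shape=IMG_SHAPE, include_top=False, weights="imagenet")
base.trainable = False          # freeze the pre-trained feature-extraction layers

# Attach a new classification head to be trained on our own images
model = keras.Sequential([
    base,
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer=keras.optimizers.Adam(),
    loss="sparse_categorical_crossentropy",   # integer labels
    metrics=["accuracy"],
)

# x_train: (N, 224, 224, 3) image array, y_train: integer labels (placeholders)
# model.fit(x_train, y_train, epochs=10, validation_split=0.2)
```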
The final model can be seen above. It was trained with the Keras Adam optimizer, with accuracy as the tracked metric. As you can see below, the end product was a model that could detect in real time which move the user was playing.
In addition to the computer vision model, I also assisted another group member in implementing reinforcement learning to allow the model to adapt to basic patterns from the user. For example, if the user started to play a lot of "snakes" in a row, the model would adapt by favoring the counter move, in this case "bird". This was done by creating a class that set up new environments and updated the probability state for each move through matrix multiplication with a Markov chain transition matrix of probabilities. The computer's move was then informed by the previous state.
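A minimal sketch of the Markov-chain idea follows; the class name, the Laplace-smoothed counts, and the full win rule beyond "bird beats snake" are assumptions added for illustration.

```python
import numpy as np

MOVES = ["bird", "cow", "snake"]
# Assumed win rule: bird beats snake, snake beats cow, cow beats bird.
# Maps the user's predicted move to the computer move that beats it.
COUNTER = {"snake": "bird", "cow": "snake", "bird": "cow"}

class MarkovPredictor:
    """Tracks transition probabilities between the user's consecutive moves."""

    def __init__(self, n=3):
        # Rows = previous move, columns = probability of the next move.
        self.counts = np.ones((n, n))          # Laplace-smoothed observation counts
        self.T = self.counts / self.counts.sum(axis=1, keepdims=True)
        self.prev = None

    def update(self, move):
        """Record the user's latest move and refresh the transition matrix."""
        idx = MOVES.index(move)
        if self.prev is not None:
            self.counts[self.prev, idx] += 1
            self.T = self.counts / self.counts.sum(axis=1, keepdims=True)
        self.prev = idx

    def choose(self):
        """Counter the user's most likely next move given the previous state."""
        if self.prev is None:
            return np.random.choice(MOVES)
        state = np.zeros(len(MOVES))
        state[self.prev] = 1.0
        next_probs = state @ self.T            # propagate state through the chain
        predicted = MOVES[int(np.argmax(next_probs))]
        return COUNTER[predicted]
```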
Final Product:
Other members of the group contributed significantly to creating a fully functional GUI for the game, the actual game logic, and parts of the reinforcement learning. As a result, we ended up with a complete game, and a trusty friend to play Rock Paper Scissors with.