Creating My First AI Game

Okay, this was course work, so I won’t go into too much detail about what the game was or how exactly the code was written (we wouldn’t want anyone to just copy it, right? 😉), but I’ll go over the concepts introduced. I had so much fun working on this 😄

The Game

Essentially, it’s a made-up two-player board game where each player aims to eliminate all of the opponent’s pieces. The constraint is that every AI move must be selected in under 3 seconds. At each turn, the AI must decide what the best move is for the current player.


First I generate a tree where the root node is the current game state. Then the children are all the possible game states that can be created from the current one, given that only the current player can make a move. Then we switch turns and keep generating children recursively (depth-first). We could go on forever (until we’ve reached endgames) but of course… ain’t nobody got time fo’ dat! We only have 3 seconds. So I set a constant depth limit (3 levels down the tree, for example) at which the search tree stops expanding.

Now we implement something called minimax. After evaluating the leaf nodes of the tree (we’ll talk about evaluation later), the values need to be propagated up the tree until only one remains and the next move is picked (bottom-up). To do that, we assume that the player whose turn it is (levels 0-2-4 in the image below) will always try to maximize the outcome, while their opponent (levels 1-3-etc.) will always try to minimize it. So at each level, a node will pick from its children the one that is either the maximum (on Max’s levels) or the minimum (on Min’s levels) of all children.

Example of values being propagated up the tree. Level 3 nodes signal Min's turn, so the minimum of the children is selected. Level 2 is Max's turn, so the maximum of the children is selected. And so on. Image source:
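To make the idea concrete, here is a minimal minimax sketch in Python. The toy tree, its leaf values, and the helper names (`children`, `evaluate`) are made up for illustration and have nothing to do with the actual course game.

```python
# Minimal minimax: recursively bring leaf values up the tree, with Max levels
# taking the maximum of their children and Min levels taking the minimum.

def minimax(state, depth, maximizing, children, evaluate):
    """Return the best value reachable from `state`, looking `depth` plies ahead."""
    kids = children(state)
    if depth == 0 or not kids:            # depth limit reached, or an endgame leaf
        return evaluate(state)
    values = [minimax(k, depth - 1, not maximizing, children, evaluate) for k in kids]
    return max(values) if maximizing else min(values)

# Hypothetical toy tree: root is Max's turn (level 0), its children are Min's turn.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
leaf_values = {"a1": 3, "a2": 5, "b1": -11, "b2": 15}

children = lambda s: tree.get(s, [])
evaluate = lambda s: leaf_values.get(s, 0)

best = minimax("root", 2, True, children, evaluate)
# Min keeps min(3, 5) = 3 and min(-11, 15) = -11; Max then picks 3.
```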

Alpha-Beta Pruning


Cool! But sometimes parts of the tree are irrelevant to evaluate and can be cut out to save time: whenever there’s no chance that a maximizing (or minimizing) node would ever consider choosing them.


That happens a lot, and exploiting it is called alpha-beta pruning. You can visualize alpha-beta (and minimax) at the following link, courtesy of UC Berkeley:

Let’s take a closer look at the following image. We can see that a branch has been pruned because its value cannot change the result that will eventually be brought up. If that value is less than the parent max node’s value, 15 (yes, we can see that it is, but theoretically we haven’t calculated it at this point; we are going depth-first), then the max node will not select it. And if that value is more than 15, it still doesn’t matter, because the parent’s parent (the min node above) will never select a value higher than -11, since it wants to minimize the result. So that branch is pruned: no matter what its value is, it will not be useful.

Branches can be 'pruned' to save execution time and explore other parts of the tree.

Note: If the branching factor at that node was more than 2, then every one of the extra branches would also have been pruned.
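As a hedged sketch of how that cutoff can be implemented: alpha tracks the best value Max can already guarantee, beta the best (lowest) value Min can guarantee, and whenever alpha ≥ beta the remaining siblings are skipped. The toy tree below is hypothetical, with leaf values chosen so that one branch actually gets pruned.

```python
# Alpha-beta pruning sketch over the same kind of toy tree as before.

def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
    kids = children(state)
    if depth == 0 or not kids:
        return evaluate(state)
    if maximizing:
        value = float("-inf")
        for k in kids:
            value = max(value, alphabeta(k, depth - 1, alpha, beta, False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                 # prune: Min above already has something lower
        return value
    value = float("inf")
    for k in kids:
        value = min(value, alphabeta(k, depth - 1, alpha, beta, True, children, evaluate))
        beta = min(beta, value)
        if beta <= alpha:
            break                     # prune: Max above already has something higher
    return value

# Hypothetical toy tree; evaluated leaves are recorded to show the pruning.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
leaf_values = {"a1": 3, "a2": 5, "b1": -11, "b2": 15}
evaluated = []

children = lambda s: tree.get(s, [])
def evaluate(s):
    evaluated.append(s)
    return leaf_values.get(s, 0)

best = alphabeta("root", 2, float("-inf"), float("inf"), True, children, evaluate)
# After the "a" branch gives Max a guaranteed 3 and b1 comes back as -11,
# the b2 leaf is never evaluated: only 3 of the 4 leaves are visited.
```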

Iterative Deepening

Another Problem…

We now have the ability to go deeper down the tree in the same amount of time, because parts of the tree are completely ignored. But we still have another problem: inefficient resource usage. Depending on the game state, sometimes there are more children to explore, so the tree is much wider (higher branching factor). This means it takes more time to reach the same depth, and keeping a constant depth throughout the whole game is not efficient. At depth 3, for example, the average execution time is 0.2 seconds, with no move using more than 3 seconds. Sometimes depth 4 takes 0.3 seconds, which is awesome, but at certain game states it takes 11 seconds to reach the same level. That’s a huge discrepancy, and you want to make sure you always use the amount of time you are given (3 seconds) without exceeding it and without underusing it either (the deeper you go, the more informed your decision is).


To solve this, you sometimes go deeper to use up the full 3 seconds, and sometimes you cut the search short because continuing to a certain depth would take too long to compute. So basically, you adapt the explored depth to the game state, and I do that with a technique called iterative deepening.

Iterative deepening is basically depth-first search, but with an increasing depth limit. We start our search at a depth limit of 1, and once we are done we increment the depth limit and start the search again. Keep in mind that the children of previously explored nodes will not change if they have already been generated, so the actual time iterative deepening takes isn’t that much worse than exploring a fixed depth, especially if your evaluation function isn’t too expensive. I add a timer to ensure the threshold of 3 seconds is never broken. When the allocated time runs out, whatever value we brought up last can be selected right away. The result is that the depth increases as the branching factor drops, and we are left with a varying explored depth over the course of a match. This, for obvious reasons, was a game-changer: it maximizes the time resource while exploring deeper levels overall. The image below is a small sample of the time in seconds for a decision to be made at each turn, and the depth level explored at that turn. An average of 5 levels explored over the course of a game!

Partial sample of a game. On the left, the time in seconds for a move to be selected. On the right the depth-level explored.

Note: The average execution time is always around 2.85 seconds because that was my hard limit to make sure that 3 seconds will never ever be surpassed (better safe than sorry 😌).
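A sketch of what iterative deepening with a time budget can look like in Python. The toy tree, the helper names, and the way the timeout is handled (abandoning the in-progress depth and keeping the result of the last fully completed one) are illustrative assumptions, not the actual course code.

```python
import time

def iterative_deepening(root, children, evaluate, time_budget=2.85, max_depth=64):
    """Re-run a depth-limited minimax with increasing depth until the time
    budget runs out, keeping the best move of the last fully completed depth."""
    deadline = time.monotonic() + time_budget

    def search(state, depth, maximizing):
        if time.monotonic() > deadline:
            raise TimeoutError            # abort the current (incomplete) depth
        kids = children(state)
        if depth == 0 or not kids:
            return evaluate(state)
        values = [search(k, depth - 1, not maximizing) for k in kids]
        return max(values) if maximizing else min(values)

    best_move, depth = None, 1
    while depth <= max_depth:
        try:
            moves = children(root)
            scored = [(search(m, depth - 1, False), m) for m in moves]
            best_move = max(scored)[1]    # result of a fully completed depth
            depth += 1
        except TimeoutError:
            break                         # out of time: keep the previous result
    return best_move, depth - 1

# Hypothetical toy tree, as before.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
leaf_values = {"a1": 3, "a2": 5, "b1": -11, "b2": 15}
move, reached = iterative_deepening("root", lambda s: tree.get(s, []),
                                    lambda s: leaf_values.get(s, 0), max_depth=3)
```

The 2.85-second default mirrors the safety margin mentioned above; on this tiny tree the budget is never hit, so the search simply runs out of depths to try.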

Depth-Preferred Transposition Table

I mentioned that with iterative deepening we can keep the children in memory to avoid regenerating them, but we can also encounter the same game state multiple times in the same tree. So we can save the calculated value of a node in order to reuse it when needed. For this we use a depth-preferred transposition table. A calculated value that is further away from a leaf node (higher up the tree, relative to the depth explored) is preferred, because it is more informed, having been brought up from a longer chain of events. This makes it possible for a node to carry a value informed by depths that technically haven’t been visited in the current search, but whose stored result can simply be replayed. We know that the children of a given game state will always be the same if it is the same player’s turn.
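Here is a minimal sketch of what “depth-preferred” means in code. The key shape (state plus player to move) and the entry shape are my own illustrative choices, not the course implementation.

```python
# Depth-preferred transposition table: an entry is replaced only if the new
# value was computed with a greater remaining search depth, i.e. it is more
# informed. A lookup only succeeds if the stored entry is at least as deep
# as what the current search needs.

table = {}  # (state, player_to_move) -> (remaining_depth, value)

def store(state, player, remaining_depth, value):
    key = (state, player)
    old = table.get(key)
    if old is None or remaining_depth > old[0]:
        table[key] = (remaining_depth, value)   # keep the deeper result

def lookup(state, player, remaining_depth):
    entry = table.get((state, player))
    if entry is not None and entry[0] >= remaining_depth:
        return entry[1]                         # at least as informed: reuse it
    return None                                 # not stored, or too shallow

store("some_state", "max", 3, 7)   # value computed by searching 3 plies below
store("some_state", "max", 1, 2)   # shallower result for the same state: ignored
```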

Heuristic - Evaluation Function

Finally, you know that “calculated value” we have been bringing up the tree this whole time? Well, that’s called the heuristic value, and it’s found by running the heuristic function on a game state. The heuristic function is the actual algorithm that calculates how good or bad a given game state is. It runs on our terminal nodes (tree leaves), wherever they may be. This is 100% dependent on the actual game, so it would be useless to explain my heuristic without knowing exactly how the game works. But for a heuristic to be good, it requires a deep analysis and understanding of how the game works, which positions are stronger or weaker, and which tactics pay off over time.
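Since the actual game (and therefore my heuristic) can’t be shown, here’s a generic, hypothetical example of the kind of evaluation many board-game heuristics start from: a weighted material count from the max player’s perspective. The function name, piece names, and weights are all made up for illustration.

```python
def material_heuristic(my_pieces, opponent_pieces, weights=None):
    """Score a state: positive favours the max player, negative the opponent."""
    weights = weights or {}
    mine = sum(weights.get(p, 1) for p in my_pieces)      # default weight of 1
    theirs = sum(weights.get(p, 1) for p in opponent_pieces)
    return mine - theirs

# Chess-flavoured example weights (purely illustrative).
value = material_heuristic(["queen", "rook"], ["pawn", "pawn", "pawn"],
                           weights={"queen": 9, "rook": 5, "pawn": 1})
# 9 + 5 - 3 = 11: this state favours the max player.
```

Real heuristics usually go far beyond raw material (positioning, mobility, threats), which is where the game-specific analysis mentioned above comes in.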


Anyways I had so much fun working on this. Part of the reason why this was so exciting for me is that I play Chess all the time (seriously, every single day) and now that I have researched and implemented my own system, I understand how chess AI players work. Of course top board game AIs do way more advanced stuff that you can read all about online, but knowing the building blocks behind something you use every day is really awesome!

I was hoping to keep this as short as possible, so I tried to simplify things a bit (How did that work out? 😬).

Since you made it this far… have a wonderful day 😊

Mentoring Hackers @ BlocHacks Hackathon

BlocHacks - Hackathon for Social Good

Last weekend I gave my first live coding workshop at BlocHacks, and I also participated as a mentor. BlocHacks is a hackathon for social good organized by the team at DevBloc, the social innovation catalyst at The Refugee Centre. Check ’em out: The challenge given at the beginning of the hackathon was provided by the United Nations Refugee Agency (UNHCR) and gave the hackers a lot to think about while designing their solution.


During the course of the 24-hour hackathon, there were multiple workshops presented to the hackers. There was one by Google, one by IBM, and one by… me & my teammates! 😅 We were tasked with giving an introduction to building a chatbot using React Native & IBM Watson. I was in charge of the React Native part & my teammate Chris was responsible for the Watson part.

The room was packed, with attendance around 30 people; an amazing turnout, more than we expected! 😄 I began from scratch and started a React Native project with Expo using CRNA. I was casting both the mobile phone & the code editor to the projector and enabled hot-reloading so that everyone in the room could see the progress & debug problems in real time. The first part was to introduce the basics of React Native and build a component with basic styling to display chat messages and an input box, using the state to store all messages. The second part involved using lifecycle methods and callbacks to make API calls to the Watson service and display message responses on the screen. The result is very simple (see it on GitHub), but the main challenge, given the time constraint, was to balance the time spent explaining core concepts about the language and technologies being used against actual implementation and the goal of building a chatbot. In addition, the level of experience of attendees ranged from non-developers to seasoned React developers, so I didn’t want anyone to get bored or feel lost at any point.

Live coding was a cool experience. I had existing code available for reference in case something unexpected happened, which felt like a solid backup plan to avoid freezing during the live presentation. Some bugs did appear, but I knew how to fix them right away and explained the problem & solution to everyone, which was cool. This gave attendees a glimpse of some common issues faced when working with React Native.

Anyways, it went really well. I deployed the app to Expo and displayed the QR code on the screen for everyone to try it out themselves on their mobile phones (hurray for cross-platform!). This added a more interactive feel to the workshop which I feel was beneficial; showing everyone that in less than an hour they can have a working chatbot that can be used in their hack for example.


Now, as I mentioned, I also volunteered as a mentor for this hackathon. This was my first hackathon mentorship experience, and I was worried that I would not be very helpful to the participants. I mean, I’m just a student, right?

The way it was structured, there would be three mentor-hours during the course of the event where participants would be able to ask the mentors questions. I put myself in the participants’ shoes and realized that it may seem kind of intimidating; people are usually reluctant to ask questions for fear of looking “dumb” in front of the mentors (“experts”). From personal experience, I feel like this sort of situation happens a lot, from classrooms to work environments: there’s this fear of sounding foolish in front of those seen as experts (mentors, project managers, etc.), as if seeking knowledge from others were a disservice to one’s own intelligence. Weird, isn’t it? ¯\_(ツ)_/¯

Anyways, not many teams called us over when the mentor-hour announcement was made, so I decided to take a different approach. At random times, I went table to table, sat down with the teams, and started talking to them informally about what they were working on. Through these friendly conversations, we would identify blockers and think of ways to overcome them. This broke down the barrier between mentor & participant and created an environment where it was easier to communicate. I stuck around throughout most of the hack, and hackers were not hesitant to let me know when they needed a second opinion or some advice. We became kind of like friends; it was really fun. 🙂

I ended up helping out in technical areas like Firebase, MySQL, Android, Solidity, React, etc., but also with brainstorming and feedback about the design and direction of their projects. This actually helped me a lot in terms of self-confidence; let me elaborate. The tech field is so vast and ever-evolving that you always feel like you don’t have enough knowledge, because there is so much to learn and you only ever get to scratch the surface. As someone who recognizes that and is always looking to learn something new, it’s often difficult to venture deeply into a single topic. So this really made me realize that the breadth of experience and topics that I read and learn about is not to be taken for granted, and is something I should appreciate. I surprised myself with my own ability to help others, which really boosted my self-belief and encouraged me to continue learning every day. That’s really (selfishly?) one of my biggest takeaways from this mentorship experience. Helping others overcome challenges in any way possible feels great, and I would love to do this again soon.

Thanks to the organizers for giving me this opportunity! 😊

My First Live Television Interview!

Breakfast Television

A couple of weeks ago I wrote about the opportunity I had to present our project to the high commissioner of the UNHCR. I also mentioned how I’d like to improve my public speaking skills. Well, today I had another chance to test that. Breakfast Television wanted to interview The Refugee Centre and talk about their social innovation catalyst initiative. I was asked to join and give a brief explanation of our app on live television.

The buildup was pretty exciting. I mean, I’ve never been on live television, so I wasn’t sure what to expect, and we didn’t have much time to prepare (I only found out about 12 hours prior), but yeah, I was super excited. We got there early and were given a quick rundown of how the interview would flow. After a quick touch-up (yes… make-up 😜) we were brought behind the scenes, where we got to see first-hand how the hosts at Breakfast Television do live shows. It was extremely impressive. Really fast-paced: moving from one area to another, one subject to another, and always staying calm and collected. Their transitions are so smooth. I was super impressed by their work. 👍

Anyways, we sat at the interview desk and our interviewer, Derick (check him out on Twitter), started having a regular friendly conversation with us. It was still during the commercial break, so we weren’t live yet, but out of nowhere Derick started talking to the camera and introducing us. Having this conversation and flowing into the interview from it completely took away the nerves; it felt like a totally normal discussion and not an interview on live TV. He’s a very talented person. The interview went really smoothly, and after about 5 minutes we were done and they moved on to other interviews, like one with Andrew Scheer (yes, the leader of the Conservative Party of Canada was there too, in person).

I definitely learned a lot through this short visit to the set, and I have newfound admiration for the work of live television hosts. Another great experience I’m thankful for.

Here’s a link to the interview (Note: No amount of make-up could help me 😆):

Demoing to the High Commissioner of the UNHCR

I’m volunteering with The Refugee Centre on a mobile app that aims to make the process of submitting and reviewing refugee applications more efficient. The idea emerged when the number of asylum seekers increased dramatically over the summer (2017), creating a huge bottleneck; it was clear that some innovation was needed in this space.

Anyways, this project is also supported by the United Nations Refugee Agency (UNHCR), and the High Commissioner, Filippo Grandi, was on a trip to Canada. The first stop of this big trip was The Refugee Centre in Montreal, where teams were tasked with demoing their projects to the commissioner himself, as well as all the media that came with him. Pretty big deal. 😰

Demo Day:

On the day of the big demo, we had to show up early in the morning and make sure everything was set up. The stress was building up inside of me, but my teammates and I were confident and even added some pretty cool last-minute features to the app (yes, I know, that’s risky). At 8 AM, the media crew from the UNHCR showed up, ready to listen to our demo. Filippo wasn’t there yet, but we went ahead and demoed the app. It went really smoothly, better than I expected, and everything worked like a charm. So I thought to myself, “this is it, I can finally sit back and relax, my work for the day is done”. Nope: it was now 9 AM, and Mr. Grandi and his crew finally showed up.

Camera crews from various news organizations suddenly filled the place, there was Le Devoir, CBC/Radio-Canada, and others. I was called upon to redo the presentation, but I had shifted away from that mindset already and was kind of shaken up by all the extra presence. “Okay, stay calm, you nailed it once, just do it all over again”. I felt weird about repeating the same things given that some people were still present from the first demo, so I decided to switch things up on the fly during the presentation. That was a mistake. Things didn’t go quite as well, and I felt bad about getting it right in the “rehearsal” earlier on but changing things on the spot for the big one. Oh well, lesson learned ¯\_(ツ)_/¯.

A couple of other teams presented their project after us, and it went extremely well!


The aftermath was great though: we got interviewed by a couple of people and found the story of the visit on various platforms. All positive! It does feel good to know you’re working on something that will potentially solve a source of pain for a lot of people.

Here are a couple of the articles where the project was mentioned:


One thing I like to do when going through new experiences is to reflect and see what I can improve upon. One of my takeaways here is that I want to improve my English public speaking. I’m currently learning my 4th language (English isn’t my first by the way) and as most multilingual people know, you’re always forced to readjust your brain when switching languages and it causes you to get mixed up. So I feel like it would be a great asset to master public speaking in English so I can be more comfortable in freestyling a presentation under pressure (it’ll also be beneficial for everyday life). Hope I get the time to work on this soon! 😃

Group Photo with the High Commissioner of the UNHCR

Docker - Intro Cheatsheet

Docker is hot 🔥 right now, so I just did the Introduction to Docker course by Andrew Tork Baker on O’Reilly Media and I absolutely loved it. I can already see its usefulness and how integrating Docker into my projects will be helpful. Look for a short cheatsheet of Docker commands further down.

What Is It?

Simply put, Docker provides the isolation power of traditional VMs, but without the overhead of running separate guest operating systems. Instead, the Docker Engine handles the containerization and allows individual applications to potentially share resources, like reused libraries, for better efficiency. Super portable & lightweight!

Dockerfiles define what your container will.. well, contain 💡… and what command it should run on startup. It’s important to always optimize your Dockerfile to reduce its size and reuse layers from other images in your repository. You first specify your base image, install any dependencies with RUN instructions (these execute at build time), then you finally set the startup command with CMD.
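As a hedged illustration, here’s what a small Dockerfile might look like for a hypothetical Python app (the base image, `requirements.txt`, and `app.py` are all assumptions for the example):

```dockerfile
# Base image
FROM python:3.9-slim

WORKDIR /app

# RUN executes at build time; copying requirements first lets this layer
# be reused when only the application code changes
COPY requirements.txt .
RUN pip install -r requirements.txt

COPY . .

# CMD is the command the container runs on startup
CMD ["python", "app.py"]
```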

Build the image from your Dockerfile with docker build . from the location of the file (Docker will look for a file named ‘Dockerfile’ in this directory). Then use the docker run command to create a container from that image.

I found myself reusing many of the same commands over and over again when playing around with Docker, so for reference here’s a short cheat sheet of useful Docker commands. Hope it helps 😊. (Words between curly braces are variables)


docker run -it {image} : Launch the container in interactive mode (attach to terminal).

docker run -d {image} : Launch the container in detached mode (in the background).

docker run -p 8000:80 {image} : Map port 80 from the container to port 8000 on the host machine.

docker ps : List active containers.

docker ps -a : List all containers (including stopped containers).

docker ps -l : List latest container.

docker start/stop/pause {container} : Self-explanatory.

docker rm {container} : Delete a stopped container.

docker rm -f {container} : Force remove a container (even if it’s running).

docker logs {container} : Display the container’s logs.

docker logs -f {container} : Follow the logs in real-time as they come.

docker attach {container} : Attach your terminal to a running container. This is basically like docker run -it, except on an already running container. But be careful: it also forwards your input, so exiting from attached mode can shut the container down.

docker diff {container} : Display the files that have changed since the container was created.

docker cp {container}:{container_filepath} {destination} : Copy files from container to host machine.

docker inspect {container} : View all the detailed information about your container.

Note: I’m currently running Docker on my Windows 7 machine (with VirtualBox & no native hypervisor).