Wednesday, December 05, 2012

Laservex Updates

Since the original Laservex launch I have been fixing many bugs and adding features. I recently added the most commonly requested feature: dragging the center of a mirror. There are also now over 50 levels (mostly thanks to dllu and GuiltyBystander). So, if you haven't tried Laservex since the initial launch, I encourage you to try it again.

The site had a nice initial traffic spike from Reddit and Hacker News but traffic has been dropping recently. I regret launching/announcing the site so early in development, since during the traffic spike the game was buggier and harder to interact with (which probably lost many potential users).

Saturday, November 24, 2012

Laservex Released!

I have finished implementing the game I have been working on for the last few weeks. Try it out at www.laservex.com.

Saturday, October 20, 2012

Laservex

I have started working on a new project: www.laservex.com. This will be a remake of a Java game I made a few years ago: http://byronknoll.com/lasers.html. This time I will implement it in HTML5 and support user-generated levels.

Wednesday, October 10, 2012

The Top-k Element Problem

I have been working on the problem of efficiently finding the most common items in a large stream of data. This problem shows up in many applications - I am working on it for a data compression project. One example application: given a stream of user queries to a search engine, find the k queries that occur most often. Some Google employees published a fascinating algorithm for this problem called count sketch.

After reading through papers on the problem, I decided to use an algorithm called space-saving. I implemented it in C++ and have been testing it on the following task:

Problem: find the k most common substrings of length N in the Wikipedia database.

I haven't fully optimized my program yet, so I estimate it would currently take 4-5 days to process the 38 GiB Wikipedia file I downloaded. I do have some preliminary results on a subset of Wikipedia though - here is the output for k=1000 and N=1...30. Each row has two columns (tab separated): the first column is the substring and the second column is the count.

Since the time complexity of this algorithm is completely independent of k, I actually chose k=1 million and then took the top 1000 from the million buckets. The heuristic I used for choosing k was to use the largest value possible such that the memory used by the program did not exceed the available RAM on my computer.
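
For anyone curious, here is a minimal sketch of the space-saving idea in C++. This is not my actual implementation - the class is made up for illustration, and a real implementation uses the "stream summary" bucket structure so that every update is O(1) instead of scanning for the minimum counter:

#include <cstddef>
#include <string>
#include <unordered_map>

// Space-saving sketch: keeps at most k counters. When a new item arrives and
// all counters are taken, the item with the smallest count is evicted and the
// newcomer inherits that count plus one (so counts are over-estimates).
class SpaceSaving {
 public:
  explicit SpaceSaving(std::size_t k) : k_(k) {}

  void Add(const std::string& item) {
    auto it = counts_.find(item);
    if (it != counts_.end()) {
      ++it->second;
      return;
    }
    if (counts_.size() < k_) {
      counts_[item] = 1;
      return;
    }
    // Evict the current minimum; the new item takes over its count plus one.
    auto min_it = counts_.begin();
    for (auto i = counts_.begin(); i != counts_.end(); ++i) {
      if (i->second < min_it->second) min_it = i;
    }
    long long new_count = min_it->second + 1;
    counts_.erase(min_it);
    counts_[item] = new_count;
  }

  const std::unordered_map<std::string, long long>& counts() const {
    return counts_;
  }

 private:
  std::size_t k_;
  std::unordered_map<std::string, long long> counts_;
};

To reproduce the substring experiment, you would slide a window of length N over the text, call Add on each substring, and sort the surviving counters at the end to take the top entries.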

Tuesday, October 02, 2012

Five Player Go First Dice

In a previous post I proposed a challenge to come up with a set of go first dice for five players. I thought my prize would be safe since this seems like a difficult unsolved problem. Paul Vanetti has successfully won the challenge with the following solution:
  • 0, 1, 2, 3, 4, 5, 6, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 259, 260, 261, 262, 263, 264, 265
  • 7, 20, 21, 22, 35, 36, 37, 50, 83, 96, 97, 98, 111, 112, 113, 126, 139, 152, 153, 154, 167, 168, 169, 182, 215, 228, 229, 230, 243, 244, 245, 258
  • 8, 12, 13, 14, 15, 19, 23, 27, 28, 29, 30, 34, 38, 42, 43, 44, 45, 49, 84, 88, 89, 90, 91, 95, 99, 103, 104, 105, 106, 110, 114, 118, 119, 120, 121, 125, 140, 144, 145, 146, 147, 151, 155, 159, 160, 161, 162, 166, 170, 174, 175, 176, 177, 181, 216, 220, 221, 222, 223, 227, 231, 235, 236, 237, 238, 242, 246, 250, 251, 252, 253, 257
  • 9, 11, 16, 18, 24, 26, 31, 33, 39, 41, 46, 48, 85, 87, 92, 94, 100, 102, 107, 109, 115, 117, 122, 124, 141, 143, 148, 150, 156, 158, 163, 165, 171, 173, 178, 180, 217, 219, 224, 226, 232, 234, 239, 241, 247, 249, 254, 256
  • 10, 17, 25, 32, 40, 47, 86, 93, 101, 108, 116, 123, 142, 149, 157, 164, 172, 179, 218, 225, 233, 240, 248, 255
These dice have the following number of faces:
  • 90
  • 32
  • 72
  • 48
  • 24
When I described the problem in my original blog post I did not include any constraint requiring the dice to have the same number of faces. Such a constraint is unnecessary because of the following:

Conjecture: Given any valid set of N go first dice with number of faces x1, x2, ..., xN, it is always possible to generate a new set of go first dice with exactly LCM(x1, x2, ..., xN) faces per die.

This can be done using the following algorithm:

Suppose you have go first dice with the following labels:
  • a1, a2, ..., aQ
  • b1, b2, ..., bR
  • ...
  • y1, y2, ..., yZ
Let m = LCM(Q, R, ..., Z). The following should be a set of go first dice with m faces:
  • m*a1, m*a1 + 1, ..., m*a1 + (m/Q) - 1, m*a2, m*a2 + 1, ..., m*a2 + (m/Q) - 1, ..., m*aQ, m*aQ + 1, ..., m*aQ + (m/Q) - 1
  • m*b1, m*b1 + 1, ..., m*b1 + (m/R) - 1, m*b2, m*b2 + 1, ..., m*b2 + (m/R) - 1, ..., m*bR, m*bR + 1, ..., m*bR + (m/R) - 1
  • ...
  • m*y1, m*y1 + 1, ..., m*y1 + (m/Z) - 1, m*y2, m*y2 + 1, ..., m*y2 + (m/Z) - 1, ..., m*yZ, m*yZ + 1, ..., m*yZ + (m/Z) - 1
Example of how to apply the algorithm to a simple set of dice (which are not go first dice):
  • 1,2,4
  • 3,5
LCM(3,2) = 6, so the merged dice are:
  • 6*1, 6*1 + 1, 6*2, 6*2 + 1, 6*4, 6*4 + 1
  • 6*3, 6*3 + 1, 6*3 + 2, 6*5, 6*5 + 1, 6*5 + 2
Which becomes:
  • 6, 7, 12, 13, 24, 25
  • 18, 19, 20, 30, 31, 32
Empty regions can be removed to form:
  • 1, 2, 3, 4, 8, 9
  • 5, 6, 7, 10, 11, 12
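
Here is a rough C++ sketch of this merge construction (the function name is mine, and I have only checked it against small examples):

#include <map>
#include <numeric>
#include <vector>

// LCM merge construction: given a set of go first dice (possibly with
// different face counts), produce a new set in which every die has
// m = LCM(face counts) faces. Each original face value v on a Q-sided die
// becomes the block of labels m*v, m*v + 1, ..., m*v + (m/Q) - 1, and the
// labels are then compacted to 1..total ("empty regions removed").
std::vector<std::vector<long long>> MergeDice(
    const std::vector<std::vector<long long>>& dice) {
  long long m = 1;
  for (const auto& die : dice) m = std::lcm(m, (long long)die.size());

  std::vector<std::vector<long long>> merged;
  for (const auto& die : dice) {
    long long block = m / (long long)die.size();
    std::vector<long long> faces;
    for (long long v : die) {
      for (long long j = 0; j < block; ++j) faces.push_back(m * v + j);
    }
    merged.push_back(faces);
  }

  // Compact the labels: replace each value by its rank among all values.
  std::map<long long, long long> rank;
  for (const auto& die : merged)
    for (long long v : die) rank[v] = 0;
  long long next = 1;
  for (auto& entry : rank) entry.second = next++;
  for (auto& die : merged)
    for (long long& v : die) v = rank[v];
  return merged;
}

Running it on the two small example dice above reproduces 1, 2, 3, 4, 8, 9 and 5, 6, 7, 10, 11, 12.
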
Since LCM(90, 32, 72, 48, 24) = 1440, Paul's solution can be used to generate a set of 5 go first dice with 1440 faces each. Paul says he found a technique for turning a solution for N-1 players into a solution for N players (I do not know how this is done), so his approach can quickly generate solutions for any N. The solutions it produces are not optimal in terms of minimizing the number of faces on the dice.

Tuesday, September 25, 2012

Go First Dice Problem

The Problem:

I encountered an interesting riddle recently called the "go first dice problem". The challenge is to come up with a set of dice (one per player) such that:
  • No two faces share the same label.
  • The dice can be used to determine a fair turn order for the players after a single roll. That is, the player with the highest roll goes first, the second highest goes second, etc. A set of dice is fair if and only if every possible turn ordering is equally likely given random rolls.
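
To make the fairness condition concrete, here is a sketch of the kind of brute-force checker that could be used to validate a proposed solution. It is not the program I actually use - it simply enumerates every combination of rolls, so it is only practical when the product of the face counts is manageable:

#include <algorithm>
#include <map>
#include <vector>

// Brute-force fairness check: enumerate every possible combination of rolls,
// record which turn ordering each one produces, and verify that all n!
// orderings occur equally often. Assumes all face labels are distinct.
bool IsGoFirst(const std::vector<std::vector<int>>& dice) {
  int n = dice.size();
  std::map<std::vector<int>, long long> ordering_counts;
  std::vector<int> index(n, 0);  // Current face chosen on each die.
  long long total = 0;
  while (true) {
    // Sort players by their rolled value to get the turn order.
    std::vector<int> players(n);
    for (int i = 0; i < n; ++i) players[i] = i;
    std::sort(players.begin(), players.end(), [&](int a, int b) {
      return dice[a][index[a]] > dice[b][index[b]];
    });
    ++ordering_counts[players];
    ++total;
    // Advance to the next combination of faces (odometer style).
    int pos = 0;
    while (pos < n && ++index[pos] == (int)dice[pos].size()) {
      index[pos] = 0;
      ++pos;
    }
    if (pos == n) break;
  }
  // Fair if and only if all n! orderings appear, each with count total / n!.
  long long factorial = 1;
  for (int i = 2; i <= n; ++i) factorial *= i;
  if ((long long)ordering_counts.size() != factorial) return false;
  for (const auto& entry : ordering_counts) {
    if (entry.second * factorial != total) return false;
  }
  return true;
}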

More Information:

There are some Stack Exchange threads discussing the problem. Eric Harshbarger (puzzle designer/Lego sculptor/programmer/mathematician/Scrabble player) sells a set of 12-sided dice that solve the go first problem for four players: http://www.ericharshbarger.org/dice/#gofirst_4d12. As far as I know, there is no known solution for five players. For fewer than five players:
  • 2 players: minimum 2 faces
  • 3 players: minimum 6 faces
  • 4 players: minimum 12 faces

The Challenge:

Find a set of go first dice for five players. This will require at least 30 faces per die.

I have been working on trying to solve this (with a computer) for a few days. The search space is too large for a naïve brute-force search, so I think to make this computationally feasible you need to either:
  • find a clever way to constrain the solution space (to make it small enough to exhaustively search), or
  • come up with a heuristic so that your program is more likely to search in regions where a solution exists.

The Prize:

I will offer a small prize to whoever solves this challenge first. The prize will be a set of Eric's go first dice for four players - shipping paid by me to anywhere in the world. To claim the prize just email me with the solution and I will run a program to validate it. You can only claim the prize if you are actually the person who found the solution (e.g. don't copy it from someone). There is no set end date to this offer (as long as Eric continues selling the dice).

Sunday, September 09, 2012

Metamoku

The first release of Metamoku is ready! Try it out at: www.metamoku.com

Metamoku should work across different browsers, operating systems, and devices (including smart phones and tablets). Let me know if you encounter problems on any platform. Note that you can also play team games if multiple people choose the same color. When I was developing the game I was unsure whether it would actually be fun or contain non-trivial strategies. After playing some test matches with friends I think the game does have some interesting strategic components (especially the team matches). I currently don't have any major features/tasks left to implement. I will probably continue tweaking gameplay parameters as I get more experience playing games.

Saturday, September 08, 2012

Dark Souls PvP

I recently started my second playthrough of Dark Souls. Dark Souls is the best game I have ever played. I originally played Dark Souls on the PS3 - this time I am playing the PC port ("Prepare to Die" edition). That means now I can take screenshots!
Notice that the dragon's tail is missing - I did that.

Unhappy dragon.

This hallway is where I had an unforgettable experience. Another player had invaded my world to assassinate me (1v1 PvP). This is a pretty rare event when I play, so it is exciting when it happens. Luckily, this was the perfect place and time for him to invade - I had full health and the location could not have been better. This narrow hallway leads to a dead end, which means there is only one way in or out. That prevents my opponent from sneaking up with a surprise attack. My character is a pyromancer, which means I am most effective at mid-range attacks. The hallway was perfect for my character because it forces the other player to walk directly into my spell range. So I waited in the hallway for him to attack.

He spots me from the end of the hallway and tries to lure me out - he can see my Pyromancy Flame, so he knows that I have the advantage here. His weapon is some sort of dagger. Since I refuse to leave the hallway, he eventually approaches cautiously. When it looks like he is in my spell range, I start casting fire orbs at him. Impressively, he manages to stay far enough back to narrowly avoid my spells. When I advance he moves back, and when I move back he advances. So I continue casting the spell, since I have plenty to spare. My spell is powerful enough that a single hit would make it very difficult for him to recover. Suddenly, after one of my fire orbs misses him, he starts sprinting towards me. I cast another orb and it heads directly at him. At the perfect moment he dive-rolls under the orb and it narrowly misses him. He rolls past my side and stands up directly behind me. Before I have a chance to react he backstabs me and kills me with a single attack.

The backstab attack is that powerful because it is so difficult to get directly behind a player. I was astounded at how perfect his sequence of moves was. As he was fading away to return to his world, he turned towards my corpse and bowed. I wish the battle had been recorded - it would have been fun to watch it again.

Sunday, August 19, 2012

Metamoku

I have started working on a new game! HTML5 canvas! Network multiplayer! Turn-based strategy! Metaballs!

The name of the game will be Metamoku - a combination of "metaball" and "moku" (which means territory in Go).

I have been putting together various demos to experiment with rendering metaballs. It is actually a difficult problem to render them fast+smooth+accurate. I have tried three approaches:

Approach | Speed | Smoothness | Accuracy
pair-wise Bézier curves | Fast | Good | Bad
pixel rendering | Variable | Bad | Good
border following | Slow | Good | Good

Pixel rendering is the most common rendering approach. However, it can get very slow on large displays. Also, making the edges smooth is difficult. dllu made a two-color demo.
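
For context, here is roughly what the pixel rendering approach boils down to - a hedged C++ sketch rather than canvas code, using the common r^2 / d^2 field with a threshold of 1 (not necessarily the exact function my demos use):

#include <vector>

struct Ball { double x, y, r; };

// Pixel rendering sketch: evaluate the metaball field at every pixel and
// threshold it. The cost is O(width * height * number of balls), which is
// why this approach gets slow on large displays.
std::vector<std::vector<bool>> RenderMetaballs(
    const std::vector<Ball>& balls, int width, int height) {
  std::vector<std::vector<bool>> inside(height, std::vector<bool>(width));
  for (int py = 0; py < height; ++py) {
    for (int px = 0; px < width; ++px) {
      double field = 0.0;
      for (const Ball& b : balls) {
        double dx = px - b.x;
        double dy = py - b.y;
        double d2 = dx * dx + dy * dy;
        if (d2 < 1e-9) d2 = 1e-9;  // Avoid dividing by zero at a ball center.
        field += (b.r * b.r) / d2;
      }
      inside[py][px] = (field >= 1.0);
    }
  }
  return inside;
}

Smoothing the edges then requires anti-aliasing around that threshold, which is part of why this approach is awkward.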

Here are a couple demos I made with the pair-wise approach. These are just an approximation to the actual metaball function and are therefore not "accurate".

I found a great article about the border following approach here. This is the most complex of the three approaches. I implemented a demo here. Here is another demo where you can see some of the path-finding internals (red is border misses, blue is border hits). These demos don't have real-time rendering or drag-and-drop because it is too slow. However, this approach might be feasible for a turn-based game with one frame per turn.

Saturday, July 21, 2012

Password Management

The Problem:

  • Using the same password on multiple websites is known to be insecure.
  • Creating and remembering a different password for every website is difficult.

Solution:

My solution requires remembering two pieces of information:
  1. A single secure key.
  2. A simple string hashing function.
When registering for a website, generate your password by applying your hashing function to the domain name of the website (ignoring any subdomains and the top-level domain) and appending the output to your secure key. Voilà, you now only need to remember O(1) bits of information instead of O(N) to have unique passwords for N websites.

Example:

Step 1: generate and remember a single secure key. Let's say my key is "0vDga5".
Step 2: generate and remember a string hashing function. Here is an example of a function that creates a three-character hash:
  • Character 1: The first letter of the string.
  • Character 2: The length of the string.
  • Character 3: The last letter of the string.
Now, when registering accounts on the following websites, my passwords would be:

gmail.com: 0vDga5g5l
imdb.com: 0vDga5i4b
a.ca: 0vDga5a1a
subdomain.domain.xyz: 0vDga5d6n
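
As an illustration, here is the example scheme written out in code (a C++ sketch with a made-up function name, not something I actually use while registering):

#include <string>
#include <vector>

// Sketch of the example scheme: take the label just before the top-level
// domain (e.g. "domain" from "subdomain.domain.xyz"), hash it as
// first letter + length + last letter, and append that to the secret key.
std::string MakePassword(const std::string& key, const std::string& host) {
  // Split the hostname into dot-separated labels.
  std::vector<std::string> labels;
  std::string current;
  for (char c : host) {
    if (c == '.') {
      labels.push_back(current);
      current.clear();
    } else {
      current += c;
    }
  }
  labels.push_back(current);
  const std::string& name =
      labels.size() >= 2 ? labels[labels.size() - 2] : labels[0];
  std::string hash;
  hash += name.front();
  hash += std::to_string(name.size());
  hash += name.back();
  return key + hash;
}

For instance, MakePassword("0vDga5", "imdb.com") returns "0vDga5i4b", matching the example above.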

Thursday, July 19, 2012

Lossy Video Compression



Using a three dimensional DCT I can drop components from both the temporal and spatial dimensions. Here is what dropping high frequency temporal components looks like:

Sunday, July 15, 2012

Lossy Audio Compression

Here is an experiment I ran performing lossy audio compression with DCT:


[link]

Audio Spectrogram

Using FFTW I made a program to visualize audio spectrograms. The song "Windowlicker" contains an image hidden in its spectrogram - here is the output from my program:
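
For the curious, here is a rough sketch of how such a spectrogram can be computed with FFTW. It is a simplified illustration (no window function, no colour mapping), not my actual program:

#include <fftw3.h>
#include <cmath>
#include <cstddef>
#include <vector>

// Spectrogram sketch: slide a window over the samples, take the FFT of each
// window, and store the magnitude of every frequency bin. A real program
// would also apply a window function (e.g. Hann) and map magnitudes to
// colours on a log scale.
std::vector<std::vector<double>> Spectrogram(
    const std::vector<double>& samples, int window, int hop) {
  int bins = window / 2 + 1;
  std::vector<double> in(window);
  fftw_complex* out =
      (fftw_complex*)fftw_malloc(sizeof(fftw_complex) * bins);
  fftw_plan plan = fftw_plan_dft_r2c_1d(window, in.data(), out, FFTW_ESTIMATE);
  std::vector<std::vector<double>> result;
  for (std::size_t start = 0; start + window <= samples.size(); start += hop) {
    for (int i = 0; i < window; ++i) in[i] = samples[start + i];
    fftw_execute(plan);
    std::vector<double> column(bins);
    for (int i = 0; i < bins; ++i) {
      column[i] = std::sqrt(out[i][0] * out[i][0] + out[i][1] * out[i][1]);
    }
    result.push_back(column);
  }
  fftw_destroy_plan(plan);
  fftw_free(out);
  return result;
}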

Discrete Cosine Transform

Today I spent a few hours playing with discrete cosine transforms using the FFTW library. I have been experimenting using DCT for lossy compression in different domains. Here is a visualization I made of lossy image compression using an increasing number of frequency components:


[link]
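
For anyone who wants to reproduce this kind of experiment, here is a rough sketch using FFTW's real-to-real DCT interface. It is a simplified illustration (grayscale image, square frequency cutoff), not my actual program:

#include <fftw3.h>
#include <vector>

// Lossy compression sketch: take the 2D DCT of an image, keep only the
// lowest `keep` frequency components in each dimension, then invert.
// FFTW_REDFT10 is the DCT-II and FFTW_REDFT01 its inverse (DCT-III); the
// round trip scales every value by (2 * height) * (2 * width), hence the
// final division.
void DctCompress(std::vector<double>& image, int height, int width, int keep) {
  std::vector<double> coeffs(image.size());
  fftw_plan forward = fftw_plan_r2r_2d(height, width, image.data(),
                                       coeffs.data(), FFTW_REDFT10,
                                       FFTW_REDFT10, FFTW_ESTIMATE);
  fftw_execute(forward);
  fftw_destroy_plan(forward);

  // Drop the high-frequency components.
  for (int y = 0; y < height; ++y) {
    for (int x = 0; x < width; ++x) {
      if (y >= keep || x >= keep) coeffs[y * width + x] = 0.0;
    }
  }

  fftw_plan inverse = fftw_plan_r2r_2d(height, width, coeffs.data(),
                                       image.data(), FFTW_REDFT01,
                                       FFTW_REDFT01, FFTW_ESTIMATE);
  fftw_execute(inverse);
  fftw_destroy_plan(inverse);
  for (double& v : image) v /= 4.0 * height * width;
}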

Saturday, May 05, 2012

Elian Ambigram

Today I attempted to find an Elian ambigram. I wrote a small program to help by doing a brute-force search through dictionary words. Out of all possible English words/phrases, it turns out only a handful of combinations are valid. The reason they are so rare is that Elian script has high rotational symmetry, which adds constraints on how an ambigram can be constructed - standard Latin characters have far fewer constraints.

Sunday, March 25, 2012

symbols

This is a follow-up to my last post. Here are some comparisons of two symbols generated from the same class and displayed at the same orientation. I can adjust parameters controlling how much variance there is between symbols within a class - the parameters I chose are fairly conservative, so most symbols should appear very similar:

A New Kind of CAPTCHA

My goal for today was to create my own CAPTCHA system. I find CAPTCHAs fascinating because of their goal of distinguishing human intelligence from machines. Surprisingly, most major CAPTCHA systems (currently used by companies such as Google, Microsoft, and Facebook) are routinely solved by computers with success rates over 15% (at least according to the brief research I did on the topic today).

Most CAPTCHA systems provide an image of a sequence of characters and ask the user to type in the text. Software that attempts to solve these typically first segments the image into characters and then classifies each character individually. Character classification is a solved problem - so the only security behind current CAPTCHAs relies on the fact that character segmentation is difficult. In my attempt at creating a CAPTCHA system I will try to make both the segmentation and classification tasks as difficult as possible.

One of the reasons English letter classification is so easy is because there is a massive amount of training data available for each letter. In my CAPTCHA system, I will create my own symbols and provide only *one* training example for each symbol class. My hypothesis is that humans are better than computers at extrapolating from tiny training sets. Another benefit of creating new symbols from scratch is that it makes the system internationalizable (i.e. not dependent on the character set of a particular language).

Here is an example output from my CAPTCHA program:

The answer in this case is "1428736". First, I create 10 symbol classes and present one example from each class to the user. I then generate 7 symbols from random classes (no repetition) and arrange them from left to right (allowing overlaps). The user has to enter the symbol classes in the correct order. There is a 1 in 604,800 chance of guessing correctly at random, or a 1 in 10,000,000 chance if symbol repetition were allowed. The symbols are displayed at random orientations.
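
For reference, the first number is just the count of ordered draws of 7 distinct classes out of 10:

\[ 10 \times 9 \times 8 \times 7 \times 6 \times 5 \times 4 = \frac{10!}{3!} = 604800, \]

while allowing repetition gives \(10^7 = 10000000\) possibilities.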

A symbol from a particular class will look different between the training example and the main CAPTCHA. Each class is defined by a small set of random parameters. These parameters determine features such as branching factor, curviness, length, etc. Once I have generated the random class parameters, each symbol is also created via a stochastic process - but its general appearance will remain visually similar to the other symbols in its class due to the class parameters.

Well, I suspect that this CAPTCHA system will not work well in practice because it is too tedious for humans to solve. On first impression, though, the task seems more difficult for a computer to solve than typical CAPTCHAs are. It would be easy to evaluate the time/precision of humans solving these CAPTCHAs with a user study. However, evaluating the performance of software solvers is more difficult. One idea I had for evaluating software solvers is to host a contest (say on TopCoder or Kaggle) and offer a big cash prize as an incentive to make competitive AIs (I won't actually do this :P).

Here are a few more random CAPTCHAs from the program. I have posted the answers at the end of this post - how many can you solve without looking at the answers?

The answers are: 0718936, 9467251, 0197283, and 7634290 respectively.

Sunday, March 11, 2012

An API for Intelligence

Over the last few years I have made many attempts at creating artificial intelligence. By "artificial intelligence" I mean a general purpose system that can recognise and predict patterns in spatio-temporal data. I have written about this topic in some previous posts.

Spatio-temporal data is any data that has a temporal dimension and one or more spatial dimensions. Everything your brain perceives is a stream of spatial data over time. Any type of sensor (e.g. microphone, camera, thermometer) can create a stream of spatial data over time. The goal of artificial intelligence/machine learning/data compression is to look for patterns in this type of data and predict what the data will be in the future.

All of my AI projects have had essentially the same API. For those of you who speak Java: "double[][] perceive (double[][] inputs)". That is, a single function that takes a matrix of floating point numbers and returns another matrix of floating point numbers. The input represents spatial data at a particular moment in time and the return value from the function represents a prediction for the next input (the next time step).

A magic black box:

Let us imagine that I have a magic black box that does a great job at implementing this function. What could I do with it? Well the most obvious thing I can do is use it to predict the future. Let's use it to find out what the stock prices will be five years from now:

// Training: feed the black box one time step of historical prices at a time.
double[][] stockPrices;
for (Time t = TimeOfFirstData(); t < Now(); ++t) {
  stockPrices = GetHistoricalStock(t);
  stockPrices = blackBox.perceive(stockPrices);
}
// Predict the future: keep feeding each prediction back in as the next input.
for (Time t = Now(); t <= FiveYearsFromNow(); ++t) {
  stockPrices = blackBox.perceive(stockPrices);
}
return stockPrices;


Of course, these estimates wouldn't be very accurate because in reality stock prices depend on a vast number of different types of data (which I didn't give to the black box as input). If I gave the black box additional data in the input matrix (such as local news stories, earnings reports, weather sensors, etc.) it would do a better job at predicting stock prices.

What else could I do with the black box? If I want to have a conversation with it or make it control a robot, I would need to give it some mechanism to perform actions. Let's imagine I naively plug a light bulb into a random cell in the output matrix of the black box (and the light bulb turns on/off depending on the value of that prediction). This light bulb is now an actuator - it gives the black box a mechanism to interact with the environment. There is now a feedback loop where the black box can influence the value of future sensor readings by changing the light bulb prediction. How would the black box choose which action to take? Well, the only "goal" of the box is to minimise the difference between its predictions and what actually happens in the future. Maybe turning on the light bulb allows the black box to make more accurate predictions of future sensor values, so it "decides" to keep the light on. If I hook up a speaker and microphone to the black box, maybe it will decide that having a conversation with me will also allow it to make more accurate predictions about the future (which is probably true).

The Magic Number:

Hopefully in the previous section I convinced you that if this one function was implemented correctly, it would result in something most people would consider true AI. So, why did I choose a matrix of floating point numbers? Why not simplify the API by just making it a single array instead? Why not a higher dimensional matrix? The answer is because I think that two (as in a two-dimensional matrix) is the magic number that makes a reasonable trade-off between competing factors. Using a higher dimensional space would make implementing the function infeasible due to computational complexity. Using a lower dimensional space loses critical information about the spatial relationships between sensors. As evidence of this information loss, imagine converting a two-dimensional image into a one-dimensional array of pixels. The image would become meaningless to you because you have lost the information of how the pixels are spatially arranged.

Let us consider the most intelligent system we know of today: the human brain. I think that the human brain conforms to the same API I described above. In fact, we only need to look at a part of the brain known as the neocortex. The neocortex is a thin sheet of neurons on the outer surface of the brain. It is responsible for essentially all higher-level thought and is what makes humans intelligent. Since the neocortex is fundamentally a two-dimensional surface, it uses topographic maps to project higher-dimensional signals onto a two-dimensional space. For example, the three-dimensional touch sensors on your body are mapped onto a two-dimensional homunculus on your neocortex, where regions that are neighbouring in 3D space are also neighbouring on the homunculus (and regions which are more important are mapped to larger regions on the homunculus).

So, how close are we to being able to implement AI? I think so far the most successful efforts come from the field of data compression. A compression algorithm called PAQ8 does an amazing job of implementing "boolean perceive (boolean input)". However, it doesn't scale well to continuous numbers or higher-dimensional spaces. Another promising attempt at creating AI is coming from Numenta in the form of hierarchical temporal memory. So far their attempts have been unsuccessful, but I think at a higher level their approach to the problem is the best I have seen.

Saturday, February 25, 2012

Project Euler

Over the last few months I have been solving programming problems on Project Euler. I currently have solved 140 out of 373 problems. Since I have been solving the problems in order of increasing difficulty, my progress will probably slow down now that I have finished solving all of the easy problems. There is a great range in the difficulty of the problems - there are probably some problems that I couldn't solve even if I spent weeks working on them. If you have a Project Euler account, feel free to add me as a friend: 13166529170291_2172f964a0c5e4a2e5a4d6e690c97cb8

Game Playtime

Today I beat The Legend of Zelda: Skyward Sword. It was an amazing game - better than The Legend of Zelda: Ocarina of Time in my opinion. I noticed at the end of the game that my total playtime was recorded as 41 hours. Out of curiosity, I have gone through some of my save files from other games and compiled a list of how long it took me to beat each game. Unfortunately the game that I probably spent the most time playing, Baldur's Gate II, doesn't record total playtime.

Title | Playtime (hours, rounded to the nearest hour)
Dark Souls | 57
The Legend of Zelda: Skyward Sword | 41
Demon's Souls | 40
Valkyria Chronicles | 24
The Elder Scrolls V: Skyrim | 21
Red Dead Redemption | 15
Mass Effect 2 | 15
Heavy Rain | 14
Dead Space | 11
Uncharted 2: Among Thieves | 11
Super Smash Bros. Brawl | 10
ICO | 7
The Secret of Monkey Island: Special Edition | 5

See this post for a list of my favorite games (four years ago).