While sitting in my house daydreaming, I was looking for ideas for my next project. Thoughts flew across my mind relating things I was currently doing, challenges I was currently facing, and technology. Thinking back to my days at GMU, I remembered how much I liked my Artificial Intelligence class, specifically evolutionary algorithms and genetic programming. I also thought of how hard it was to track macronutrients (something I’m currently doing) whenever I went out to eat at my favorite restaurants. Thinking about different ways to automatically meal plan, I had an idea. What if, given a database of nutrition information from a specific restaurant, I could “evolve” the best meal for myself based on the evolution strategy algorithms I had learned about in class? With this question in mind, I set out to see whether the idea was feasible.
Candidate Solutions and Evaluating Fitness
What would the candidates I feed into the algorithm even look like? Could I model meals in a way that lets me “breed” or “mutate” them to achieve better fitness scores over time? And what would my fitness evaluation function look like?
I started on the question of candidates. I downloaded the nutritional spreadsheet from my favorite local sports bar, Beef O’Brady’s, and got to work. My favorite programming language, Julia, has a rich type system that allows complex data to be modeled and represented natively. In the end, though, my choice of data structure was influenced by the structure of my data: the spreadsheet represented each menu item as a row. To read the data, I used the Julia library ExcelReaders.jl, which loaded my data into a Julia DataFrame. I decided that candidate solutions could simply be n-row DataFrame objects, each a randomly selected subset of size n taken from the entire dataset.
```julia
"""
Helper function to generate a random candidate from the dataset.
n is the size of the candidate.
"""
function randomCandidate(n::Integer)
    # Select n random rows from the dataset.
    rows = [randRow() for i = 1:n]
    df[rows, :]
end

"""
    generateInitialPopulation(lambda::Integer, candidateSize::Integer)

From our dataset, generate an array of initial candidates to begin the search.
"""
function generateInitialPopulation(lambda::Integer, candidateSize::Integer)
    [randomCandidate(candidateSize) for i = 1:lambda]
end
```
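If you want to play with the candidate idea without the restaurant spreadsheet, here is a toy, DataFrame-free sketch of the same two functions. The `menu` vector of named tuples and its contents are illustrative stand-ins, not data from the real dataset.

```julia
# Toy stand-in for the dataset: a vector of named tuples instead of a DataFrame.
# All names and numbers here are hypothetical, for illustration only.
menu = [(item = "Wings",  calories = 860),
        (item = "Burger", calories = 950),
        (item = "Salad",  calories = 320),
        (item = "Fries",  calories = 480)]

# Pick a random row index from the toy menu.
randRow() = rand(1:length(menu))

# A candidate is n randomly selected menu items (repeats allowed).
randomCandidate(n) = [menu[randRow()] for _ in 1:n]

# An initial population is lambda random candidates.
generateInitialPopulation(lambda, candidateSize) =
    [randomCandidate(candidateSize) for _ in 1:lambda]

pop = generateInitialPopulation(5, 3)   # 5 candidates, 3 items each
```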
With the design of the candidate solutions in hand, I thought the easiest method for evaluating the fitness of these solutions would be simple subtraction. I built my fitness evaluation function to take the absolute value of the difference between my target calories and the total calories of a candidate meal.
```julia
"""
    fitness(candidate::DataFrames.DataFrame)

Calculate the fitness of the candidate, which is the absolute value of the
difference between TARGETCALORIES and the sum of all calories in the meal.
"""
function fitness(candidate::DataFrames.DataFrame)
    abs(TARGETCALORIES - sum(candidate[:Calories]))
end
```
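As a quick sanity check, the same fitness idea can be run on plain vectors of calorie counts. The target of 2000 calories here is a hypothetical stand-in for `TARGETCALORIES`; the real function sums a DataFrame column instead of a vector.

```julia
# Hypothetical calorie target; the real TARGETCALORIES is set elsewhere.
const TARGETCALORIES = 2000

# Lower is better; 0 means the meal hits the target exactly.
fitness(calories) = abs(TARGETCALORIES - sum(calories))

fitness([550, 700, 750])   # exactly on target → 0, the ideal score
fitness([900, 900, 900])   # 700 calories over the target → 700
```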
With the candidate solutions designed and the fitness evaluation function complete, I had to decide on an actual algorithm to grow generations of meals with better and better fitness scores. To find the right algorithm I turned to an old textbook of mine, Essentials of Metaheuristics. Within this book, Algorithm 18, the (mu, lambda) strategy, seemed perfect.
The (mu, lambda) Evolution Strategy.
Boiled down, this algorithm consists of a few discrete steps. The first step is to generate an initial population of size lambda, which I do using the functions pasted above. The next step is to begin evolving new solutions. The evolution process can be broken down into a few steps as well. First, evaluate the fitness of every member of the population, saving the fittest candidate seen so far. Next, keep the mu candidates with the highest fitness scores (this is called truncation selection). Finally, iterate over each of the mu selected parents and, lambda/mu times each, generate new children by mutating copies of the parent. We exit the evolution process only when the best candidate of a generation is an ideal solution, or when we have run out of time.
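The loop described above can be sketched end to end in a few lines. This is a minimal, self-contained version that represents meals as plain vectors of calorie counts drawn from a toy `menu`; the names `TARGET`, `menu`, `evolve`, and the parameter values are assumptions for illustration, not the actual implementation.

```julia
# Hypothetical calorie target and toy menu (item calorie counts).
const TARGET = 2000
menu = [150, 250, 350, 450, 550, 650, 750]

fitness(meal) = abs(TARGET - sum(meal))       # lower is better; 0 is ideal
randmeal(n)   = [rand(menu) for _ in 1:n]     # random candidate of n items

# Coin-flip mutation: each item is replaced with probability 0.5.
mutate(meal) = [rand() > 0.5 ? rand(menu) : item for item in meal]

function evolve(mu, lambda; mealsize = 4, generations = 100)
    population = [randmeal(mealsize) for _ in 1:lambda]
    best = population[1]
    for _ in 1:generations
        # Step 1: evaluate fitness, tracking the fittest candidate seen so far.
        for p in population
            fitness(p) < fitness(best) && (best = p)
        end
        fitness(best) == 0 && break           # ideal solution found
        # Step 2: truncation selection — keep the mu fittest parents.
        parents = sort(population, by = fitness)[1:mu]
        # Step 3: each parent spawns lambda/mu children by mutation.
        population = [mutate(p) for p in parents for _ in 1:div(lambda, mu)]
    end
    best
end

best = evolve(2, 10)
```

Note that, as in the (mu, lambda) strategy, the parents themselves are discarded each generation; only their mutated children form the next population, with the best candidate tracked separately.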
My Mutation Algorithm
The mutation algorithm that I designed is very simple: it walks each of the items within a meal (rows in a small DataFrame) and flips a coin for each one. If the result is heads, it replaces that food item with a randomly selected row from the dataset; if the result is tails, it moves on to the next food item and flips another coin. It repeats this process until it has iterated through all rows in the candidate solution.
```julia
"""
Our mutator function steps through the parent and randomly selects the
alleles (meal items) to delete, then replaces them with new random alleles.
"""
function mutate(parent)
    # Copy the parent so we can do some work.
    child = deepcopy(parent)
    toDelete = Int[]
    for i in 1:size(parent, 1)
        # Heads: mark this row for replacement.
        if rand(Float64) > 0.5 # NOTE: make this tunable
            push!(toDelete, i)
        end
    end
    # Delete all marked rows at once.
    DataFrames.deleterows!(child, toDelete)
    # Add new random rows to replace the ones we deleted.
    for i in 1:length(toDelete)
        push!(child, df[randRow(), :])
    end
    child
end
```
Using Algorithm 18 and all of the functions above, I implemented the (mu, lambda) evolutionary meal-planning strategy. My candidate solutions were subset DataFrames drawn from the DataFrame of the entire dataset. My mutator function was based on a simple “coin flip” probability, replacing random meal items with other random meal items in the hope of mutating into a higher-fitness meal. The results from a few runs are shown below.
Even from this small experiment, I think the results show that evolution strategies are adequate search algorithms for the search space of meal planning. If you can model meals so that they are made up of food items which can be modified or replaced (like alleles), applying the algorithm is straightforward.
The full repository for this code can be found here.