Jared Sylvester

Compute what is computable and make computable what is not so.
Department of Computer Science
University of Maryland
A.V. Williams Building, Rm 3136
College Park, MD 20742

jsylvest /at/ umd /dot/ edu
@jsylvest

Find me on Mendeley or Academia.edu.

News

Sept '13:  I've finally got the code, samples and documentation online for TwiGVis, my Twitter mapping visualization. Check out some of the sample videos on that page, or through my Vimeo account.

June '13:  I just got the news that both of my submissions to IACAP were accepted. The troubling news is that IACAP stands for "International Association for Computing and Philosophy." Actually that's not troubling at all. What is troubling is that I'm no philosopher. Looks like I'm going to have to dust off my old epistemology books and do some brushing up! [Update: They're a little slow getting the proceedings online, but I've posted both papers below, as well as the slides for my talk.]

May '13:  UMD's Center for Complexity in Business is looking to hire a postdoc and a couple of undergrad researchers. We hire from a wide range of backgrounds: right now we've got CS, Physics, Applied Math, Econ, and Marketing. Please forward the openings to anyone you know who might be interested; the sooner these positions get filled the sooner I can stop juggling seventeen dozen different projects ;)

April '13:  My Neural Networks paper on autonomous learning has finally been released. Out of respect for the editors I won't mention how long it's taken from planting to harvest ;) You can grab a copy below or on Science Direct.

March '13:  I've started blogging. (Well, blogging under my own name. I've been doing the incognito thing for a long while.) I'm thinking the theme will mostly be cross-over between CS and business, but I'll also post some things like book reviews and other overflow content that doesn't really fit here. Check it out.

February '13:  I'm giving a presentation to the Marketing Dept's biweekly "Quant Lunch" on Monday 25 February in order to solicit some feedback on the modeling & prediction project I've been working on for Smart Bomb Interactive. If you're interested in social networks, business, and machine learning stop by Smith 2509 at 12:00.

September '12:  Want to help with my research? In order to compare one of my neural network systems against human performance on a memory game, I need actual human data. That's where you come in. You can go to this website I created to simultaneously have nostalgic fun playing a memory game you probably haven't played since childhood while also helping my research. Come on, do it for Science.

August '12:  I've been teaching myself jQuery over the last couple of weeks, and have added some small, dynamic gimcrackery to this site. I've only tested them on Chrome, Firefox and Safari on my Mac, and only thoroughly on the foremost of those, so if there's some weird stuff going on for you... sorry. I spent enough hours back in '98–'02 doing cross-browser compatibility testing to last me a lifetime, and I'm in no mood to go find a Windows box to try out my little gewgaws in IE.

(PS: Where was jQuery a decade ago? I can't begin to describe how much hair-pulling and teeth-gnashing it would have saved me back then.)

(PPS: Speaking of cross-browser testing, I have no idea how the typography looks on Windows machines. I've spent a bit of time getting it looking the way I want in Chrome/Mac, but I make no promises about how other machines will render things, especially w.r.t. letter weights.)

July '12:  It looks like I'm going to be charting a new course. Due to some budgetary circumstances beyond our control, CASL has shut down its computational modeling efforts, which has ended my funding. Less than a week after I got that bad news though, I managed to find an RA position with Bill Rand, Director of the Center for Complexity in Business at UMD's Smith School of Business. I'll be working on social media modeling with Bill, along with professors Louiqa Raschid (Information Systems) and Yogesh Joshi (Marketing). This is definitely an unscheduled detour for me, but it's an exciting one.

May '12:  We have a puppy! Everyone, this is Bonnie, our new three-month-old Westie.

April '12:  Everyone is invited to the first annual UMD AI Day on 9 April. I'll have a poster on display about my current work. When I get a decision back from the editor regarding the journal paper it's based on I'll make a PDF of the poster available here. In the meantime, here's the slide I used for my 90 second spotlight / elevator pitch.

Jan '12:  My Erdős Number is five! (Or no more than five, really. It's tough to tell the exact number since many of my papers, and those of my collaborators, are not in the appropriate databases.) Okay, so plenty of people have Erdős numbers of five. It's not exactly exalted company. (In fact, it's the median.) But it is finite and positive, so I'll take it. Some guy in Ann Arbor once tried to auction off a five. Besides, John Nash and Stephen Hawking have Erdős numbers of four, and Paul Samuelson, Niels Bohr, Francis Crick and Alan Turing have fives, so it's not like I'm in the outer fringes of the Erdősphere.

Nov '11:  I posted my BICA 2011 slides below.


About Me

I'm a doctoral candidate at the University of Maryland, College Park, in the Department of Computer Science. My main research interest is in biologically-inspired computing, primarily neural networks at this point, though I've done some evolutionary computation in the past as well. More specifically, my dissertation work is on executive function and cognitive control (working memory, decision making, etc.) using neural-inspired systems rather than rule-based ones.

I graduated in 2006 from the Computer Science and Engineering department at Notre Dame, where I lived in Zahm House.

Outside of academics, I'm interested in economics (especially the George Mason variety) and political philosophy. Film and art — especially sculpture, animation and algorithmic art — are also big interests. Recently I've also taken up an interest in baking bread. Our new Westie puppy also takes up a bit of my time.

C.V. / Résumé

Research

I am currently working with Jim Reggia on exploring neural models of cognitive control. Most cognitive control models are built using symbolic, rule-based paradigms. Such systems are both biologically implausible and often tend towards the homuncular. The neural models that do exist are typically designed very narrowly for a particular task, require a great deal of human intervention to tailor them to the objective, and have trouble scaling to larger problem spaces.

I am working toward a more generalizable model of cognitive control within a neural paradigm by creating networks which learn not only memories of environmental stimuli but also the steps necessary for completing the task at hand. The steps are stored in a memory formed by a sequential attractor network I developed, so that they can be visited in order. I call my model GALIS, for "Gated Attractor Learning Instruction Sequences."

By generating behavior from the learned contents of a memory rather than the explicit structure of the network itself, I believe it will be much easier for the model's behavior to change. Rather than having to rebuild the "hardware" of the network, you can instead load different "software" by training the memory on different patterns. Furthermore, making the model's behavior readily mutable opens the door to the model improving its performance as it gains experience. That, in turn, should allow the model to learn the behavior necessary to complete a task on its own.

Basing behavior on memory contents rather than architecture is not unlike the shift from clockwork automata like Vaucanson's "Digesting Duck" to the Jacquard Loom. The latter was an important step in the history of computation because its behavior could be changed simply by swapping in a different set of punchcards — i.e., by changing the contents of its memory. Of course GALIS surpasses the Jacquard loom because the loom was only able to follow instructions, not conduct any computation of its own. GALIS, on the other hand, determines endogenously when and how to modify its working memory, produce outputs, etc.


In addition to my dissertation research I'm also working with Bill Rand in UMD's Smith School of Business's Center for Complexity in Business. I'm working on a couple of projects, but the main one for me is an effort to model social interactions in an MMORPG with a freemium business model. Our goal is to model which players will convert from free to paid users based on their location in the in-game social graph and on their characteristics and those of their friends. We're using a variety of techniques, including agent-based modeling, logistic regression and assorted machine learning methods.


Prior to GALIS I worked with Jim on two other projects. The first was a computational model of working memory formation, done in conjunction with a wide-ranging study at UMD's Center for Advanced Study of Language into the role of working memory in language tasks. That study of working memory led into the cognitive control research I am doing now. I have also used machine learning methods to analyze the results of some CASL studies to see if it is possible to determine who will benefit from working memory training based on pre-test results. Please see the 2011 tech report below for more.

The second project, begun in Spring 2007, deals with symmetries in topographic Self-Organizing Maps. By limiting the radius of competition and choosing multiple winners for standard Hebbian learning we can generate cortices with global patterns of symmetric maps. Please see the 2009 Neural Computation paper below for details.


As an undergrad I did Machine Learning research. I worked on creating and testing a system called EVEN, for "Evolutionary Ensembles." It is a genetic algorithm framework for combining multiple classifiers for machine learning and data mining. It is very flexible, with the ability to combine any type of base classifiers using different fitness metrics. This work was done with Nitesh Chawla, who advised me for my final two years at Notre Dame.

Publications

Journals
Reggia, J., Monner, D., & Sylvester, J. "The Computational Explanatory Gap." Journal of Consciousness Studies (Forthcoming). 2014. [ pdf ]
Darmon, D., Sylvester, J., Girvan, M., & Rand, W. "Understanding the Predictive Power of Computational Mechanics and Echo State Networks in Social Media." ASE Human Journal, vol. 2(2), pp. 13–25. 2013. [ link, pdf, BibTeX, arXiv, SSRN ]
Sylvester, J., Reggia, J., Weems, S., & Bunting, M. "Controlling Working Memory with Learned Instructions." Neural Networks, vol. 41, Special Issue on Autonomous Learning, pp. 23–38. 2013. [ link, pdf, BibTeX ]
Sylvester, J., & Reggia, J. "Plasticity-Induced Symmetry Relationships Between Adjacent Self-Organizing Topographic Maps." Neural Computation, vol. 21(12), pp. 3429–3443. 2009. [ link, pdf, BibTeX ]
Conferences
Sylvester, J., Healy, J., Wang, C., & Rand, W. "Space, Time, and Hurricanes: Investigating the Spatiotemporal Relationship among Social Media Use, Donations, and Disasters." ASE Int'l Conf. on Social Computing. (Forthcoming). May, 2014.
Rand, W., Darmon, D., Sylvester, J., & Girvan, M. "Will My Followers Tweet? Predicting Twitter Engagement using Machine Learning." European Marketing Academy Conference (Forthcoming). June, 2014.
Sylvester, J., & Rand, W. "Keeping Up with the (Pre-Teen) Joneses: The Effect of Friendship on Freemium Conversion." Proc. of the Winter Conference on Business Intelligence. February, 2014. [ pdf ]
Darmon, D., Sylvester, J., Girvan, M., & Rand, W. "Predictability of User Behavior in Social Media: Bottom-Up v. Top-Down Modeling." ASE/IEEE Int'l Conf. on Social Computing, pp. 102–107. September, 2013. [ pdf ]   (This work was also accepted for presentation at the Complexity in Business Conference and an AAAI Fall Symposium. See the talks section below for slides.)
Sylvester, J., & Reggia, J. "The Neural Executive: Can Gated Attractor Networks Account for Cognitive Control?" Ann. Mtg. of the Int'l Assoc. for Computing & Philosophy. July, 2013. [ link, pdf, slides ]
Reggia, J., Monner, D., & Sylvester, J. "The Computational Explanatory Gap." Ann. Mtg. of the Int'l Assoc. for Computing & Philosophy. July, 2013. [ link, pdf ]
Sylvester, J., Reggia, J., & Weems, S. "Cognitive Control as a Gated Cortical Net." Proc. of the Int'l Conf. on Biologically Inspired Cognitive Architectures, pp. 371–376. Alexandria, VA, August 2011. [ pdf, BibTeX, slides ]
Sylvester, J., Reggia, J., Weems, S., & Bunting, M. "A Temporally Asymmetric Hebbian Network for Sequential Working Memory." Proc. of the Int'l Conf. on Cognitive Modeling, pp. 241–246. Philadelphia, PA, August 2010. [ pdf, BibTeX ]
Reggia, J., Sylvester, J., Weems, S., & Bunting, M. "A Simple Oscillatory Short-term Memory Model." Proc. of the Biologically-Inspired Cognitive Architecture Symposium, AAAI Fall Symposium Series, pp. 103–108. Arlington, VA, 2009. [ pdf, BibTeX ]
Sylvester, J., Weems, S., Reggia, J., Bunting, M., & Harbison, I. "Modeling Interactions Between Interference and Decay During the Serial Recall of Temporal Sequences." Proc. of the Psychonomic Society Annual Meeting, November 2009. [ pdf, BibTeX ]
Chawla, N., & Sylvester, J. "Exploiting Diversity in Ensembles: Improving the Performance on Unbalanced Datasets." Proc. of Multiple Classifier Systems, pp. 397–406. 2007. [ pdf, BibTeX ]
Sylvester, J., & Chawla, N. "Evolutionary Ensemble Creation and Thinning." Proc. of IEEE IJCNN/WCCI, pp. 5148–55. 2006. [ pdf, BibTeX ]
Sylvester, J., & Chawla, N. "Evolutionary Ensembles: Combining Learning Agents using Genetic Algorithms." Proc. of AAAI Workshop on Multi-agent Systems, pp. 46–51. 2005. [ pdf, BibTeX ]
Reports, working papers, etc.
Sylvester, J., Reggia, J., & Weems, S. "Predicting improvement on working memory tasks with machine learning techniques." UMD Center for Adv. Study of Languages. Technical Report. 2011. [ pdf ]
Sylvester, J. "Maximizing Diffusion on Dynamic Social Networks." 2009. [ pdf ]
Submitted to satisfy the requirements for my Master's in CS. Originally written as a final project report for BMGT 808L (Complex Systems in Business). Currently being reworked for a journal submission.
Other Talks
"Predictability of User Behavior in Social Media: Bottom-Up v. Top-Down Modeling." Invited. AAAI Fall Symposium on Social Networks and Social Contagion. Alexandria, VA. 15–16 November 2013. pdf ]   (This is a solo, abbreviated version of the talk I gave with Dave Darmon at CCB sub.)
"Predictability of User Behavior in Social Media: Bottom-Up v. Top-Down Modeling." With David Darmon. Refereed. 5th Ann. Complexity in Business Conference. Washington, DC. 7–8 November 2013. pdf ]
"Neurocognitive Architecture Case Study: GALIS." An informal guest lecture in CMSC 727 (Neural Computation). May 2013. pdf ]
"Attractor Network Models for Cognitive Control." Given for CASL's Lunch Lecture series. College Park, MD. 13 March 2012.
"Modeling Cognitive Control of Working Memory as a Gated Cortical Network." Invited talk at the First Int'l Workshop on Cognitive and Working Memory Training. Introduction by Jim Reggia. Hyattsville, MD. 23–25 August 2011. pdf ]A chapter based on this material is in preparation.
"Oscillatory Neural Network Models of Sequential Short-Term Memory." Given for CASL's Lunch Lecture series. Introduction by Scott Weems. College Park, MD. 15 June 2010. pdf ]

The Jared Watch

This is a sample of some of the media I've been enjoying when I'm not in the lab. I think of it as an annotated bibliography of my cultural consumption.

Click the triangles to show some brief comments for each entry, or click here to show all of them. You can find older entries on my Archives page.

Extracurricular Projects

Here are some projects — small & large, scientific & artistic — that I've fooled around with in my spare time. I hope you find them to be some combination of useful, interesting, and diverting.

False Leveler

Artificial histogram matching

False Leveler is a Processing program I created to do histogram matching with random, artificially-created destination histograms. Rather than repeating what I wrote on the project's page, I think it would be a good idea to show you some examples.
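In the meantime, here's a minimal Ruby sketch of the underlying idea (grayscale only, and not the False Leveler code itself, which is Processing): build the cumulative distributions of the source and destination histograms, then send each pixel level to the target level with the nearest CDF value.

def cdf(hist)
  total = hist.sum.to_f
  running = 0.0
  hist.map { |h| running += h; running / total }
end

# pixels: flat array of 0..255 values; target_hist: a 256-bin histogram,
# which False Leveler would generate randomly.
def match_histogram(pixels, target_hist)
  src_hist = Array.new(256, 0)
  pixels.each { |p| src_hist[p] += 1 }
  src_cdf = cdf(src_hist)
  tgt_cdf = cdf(target_hist)
  # Map each source level to the target level with the closest CDF value.
  lut = src_cdf.map { |c| (0..255).min_by { |v| (tgt_cdf[v] - c).abs } }
  pixels.map { |p| lut[p] }
end

random_hist = Array.new(256) { rand(100) + 1 }   # artificial destination
matched = match_histogram([0, 64, 64, 128, 255], random_hist)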

lldist.rb

Calculate distance between lat/lon pairs

This is a Ruby function to find the distance between two points given their latitude and longitude. Latitude is given in degrees north of the equator (use negatives for the Southern Hemisphere) and longitude is given in degrees east of the Prime Meridian (optionally use negatives for the Western Hemisphere).

include Math
DEG2RAD = PI/180.0
def lldist(lat1, lon1, lat2, lon2)
  rho = 3960.0   # Earth's radius, in miles
  # Convert to spherical coordinates, in radians: longitude becomes the
  # azimuthal angle, and colatitude (90 - latitude) the polar angle.
  theta1 = lon1*DEG2RAD
  phi1 = (90.0-lat1)*DEG2RAD
  theta2 = lon2*DEG2RAD
  phi2 = (90.0-lat2)*DEG2RAD
  # Central angle between the two points (spherical law of cosines);
  # arc length = central angle * radius.
  psi = acos(sin(phi1)*sin(phi2)*cos(theta1-theta2)+cos(phi1)*cos(phi2))
  return psi*rho
end
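For example (coordinates rounded, so treat the result as approximate):

puts lldist(38.99, -76.94, 41.70, -86.24)  # College Park, MD to Notre Dame, IN: ~526 miles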

A couple of notes:

  1. This returns the distance in miles. If you want some other unit, redefine rho with the appropriate value for the radius of the earth in your desired unit (6371 km, 1137 leagues, 4304730 passus, or what have you).
  2. This assumes the Earth is spherical, which is a decent first approximation, but is still just that: a first approximation.*

I am currently writing a second version to account for the difference between geographic and geocentric latitude which should do a good job of accounting for the Earth's eccentricity. The math is not hard, but finding ground truth to validate my results against is, since the online calculators I've tried to check against do not make their assumptions clear. I did find a promising suite of tools for pilots, and I'd hope if you're doing something as fraught with consequences as flying that you've accounted for these sorts of things.
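For the curious, the heart of that second version will be the standard geodetic-to-geocentric conversion. Here's a sketch, reusing DEG2RAD from above and the WGS84 eccentricity — the textbook formula, that is, not my validated code:

E2 = 0.00669437999014   # WGS84 first eccentricity squared
def geocentric_lat(geodetic_deg)
  phi = geodetic_deg * DEG2RAD
  # tan(geocentric) = (1 - e^2) * tan(geodetic)
  Math.atan((1.0 - E2) * Math.tan(phi)) / DEG2RAD
end

geocentric_lat(45.0)   # => ~44.81 degrees; the two agree at the equator and poles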


Update: I've been putting this to use in a new project, and I've noticed an edge case that didn't crop up before. Due to some floating-point oddities, trying to get the distance between a point and itself will throw an error. In that case the value passed to acos() should be 1.0 but ends up being 1.0000000000000002 on my system. Since the domain of acos() is [-1,1] this is no good.

If you want to be on the safe side you can replace this:

psi = acos(sin(phi1)*sin(phi2)*cos(theta1-theta2)+cos(phi1)*cos(phi2))

with this:

# Same central-angle computation, but clamped to acos()'s domain of
# [-1, 1] so floating-point drift can't push it out of range.
val = sin(phi1)*sin(phi2)*cos(theta1-theta2)+cos(phi1)*cos(phi2)
val = [-1.0, val].max
val = [ val, 1.0].min
psi = acos(val)

and that will take care of things. (Yes, this is a verbose way of doing things, but I often prefer the verbose-and-overt to the laconic-and-recondite.)

Roy's Mirror –or– Through a Glass, Dithered

This is a simple animation I whipped up in Processing based on Roy Lichtenstein's series of mirror paintings.

Roy Lichtenstein, "Mirror #1," 1969.
Roy Lichtenstein, "Self Portrait," 1978.

This page has a little more background, the complete code, and a version which will run in your browser using your webcam. (For certain browsers, anyway.) If the animation doesn't work in your browser, here's how it looks:

TwiGVis: Twitter Mapping

TwiGVis (TWItter Geography VISualizer) is a program I created to visualize all of the tweets my research group had collected during Hurricanes Sandy and Irene. It will give you both still images, like below, and video output.

Later I modified it to also display data from the Red Cross on their mobile donations campaign.

Though I initially created it only for internal use in my lab, we received such good feedback at some talks that I've posted the code online. You can download the code and sample data here. Use and modify it as you wish, according to a GPL license. Read more about the project and see more examples here. I hope some of you find this useful.

Happy mapping!

gifcrop.sh

Recently I had to crop a lot of animated gifs down for a project. This isn't hard to do with ImageMagick…

$ convert in.gif -coalesce -repage 0x0 -crop WxH+X+Y +repage out.gif

…but it does require some repetitive typing, mental arithmetic, and rather mysterious incantations if you don't grok what all the coalescing and repaging is about. (I don't.) Didn't Paul Graham say that if your coding is repetitive, the problem isn't your project, it's that you're doing it wrong? Specifically, you're operating at the wrong level of abstraction. Because I found myself repetitively wrangling these ImageMagick commands into shape, I decided it was time to sidestep that problem and write a bash script to do the drudgery for me.

#!/bin/bash
if [ -z "$1" ]; then
  echo "usage: $0 infile left right [top] [bottom]"
  exit 1
fi
echo -e "  opening \t ${1}"
BASE=`echo ${1} | sed 's/\(.*\).gif/\1/'`   # filename minus the .gif
L=$2
R=$3
T=${4-0} # use argv[4], or 0 if undefined
B=${5-0} # use argv[5], or 0 if undefined
# Pull the WxH dimensions out of ImageMagick's identify output.
W0=`identify ${1} | head -1 | awk '{print $3}' | cut -d 'x' -f 1`
H0=`identify ${1} | head -1 | awk '{print $3}' | cut -d 'x' -f 2`
echo -e "  current size \t ${W0}x${H0}"

let "W1 = $W0 - ($L + $R)"
let "H1 = $H0 - ($T + $B)"
echo -e "  new size \t ${W1}x${H1}"

NEWNAME=${BASE}crop.gif
echo -e "  saving to \t ${NEWNAME}"

convert ${1} -coalesce -repage 0x0 -crop ${W1}x${H1}+${L}+${T} +repage ${NEWNAME}

Simply save this as something like gifcrop.sh, and then run it like so:

$ gifcrop.sh in.gif 10 20 30 40

That will take 10 pixels off the left, 20 off the right, 30 from the top and 40 from the bottom. The result gets saved as incrop.gif. The final two arguments are optional, since most of the time I found myself adjusting the width but leaving the height alone. So these two commands are identical:

$ gifcrop.sh in.gif 10 20 0 0
$ gifcrop.sh in.gif 10 20

This all depends on the format of the results that ImageMagick gives you from the identify command, which is used to get the current size of the input image. You may need to adjust these two lines:

W0=`identify ${1} | head -1 | awk '{print $3}' | cut -d 'x' -f 1`
H0=`identify ${1} | head -1 | awk '{print $3}' | cut -d 'x' -f 2`

On my machine, identify foo.gif | head -1 gives me this output:

foo.gif[0] GIF 250x286 250x286+0+0 8-bit sRGB 128c 492KB 0.000u 0:00.030

The awk command isolates the 250x286 part, and the cut command pulls out the two dimensions from that.

Reel Shadows

Boid Animations

This is an abstract algorithmic animation project I've been working on. In a nutshell, it's my interpretation of what would happen if you projected a movie onto a flock of birds instead of a screen.

You can read about my methods and the videos that inspired it as well as see some final renders here. This video should give you a flavor.

Monet Blend

Generally speaking, I don't see in color well. Not that I'm color blind. But when I look at a scene I notice shape and form much more than I notice color. When I was a child my art teacher had to coax me into bothering to color anything in; I was satisfied with line drawings. I spent a lot of time designing and making paper models, but always out of plain, unadorned white cardboard.

There are exceptions to this preference for space over color. Monet, Turner and Rothko all make me sit up and notice color. I especially like the various series Monet did of the same scene from the same vantage point under varying conditions. I love seeing multiple works from a series side-by-side in galleries so I can compare them.

"Waterloo Bridge at Sunset, Pink Effect"
"Waterloo Bridge, Hazy Sunshine"

This project was born out of the desire to be able to look at multiple pieces in the same series of Monet paintings at the same time. Swiveling my head back and forth rapidly only works so well, and it earns me extra weird looks from other patrons. (Plus, I feel like I need to correct my weakness w.r.t. color, and if I'm going to learn about color I might as well learn from the best.)

What I've done is write a program which loads two Monet images and blends them together. Just averaging the two would create a muddled, uninspired mess. So I use a noise function to decide at each pixel how much to draw from image 1 and how much from image 2.

Blending matrix
Combination of "Sunset, Pink Effect" and "Hazy Sunshine" using Blending matrix.

You can see an example of this blending matrix on the left. Darker pixels in the blending matrix will have a color more similar to "Sunset, Pink Effect," while lighter pixels are closer to "Hazy Sunshine." A pixel which is exactly 50% (halfway between white and black) will be given a color halfway between the color of the corresponding pixel in each image.

The blending matrix is a function of time, so each source image's influence on the output shifts as the animation runs, revealing different parts of each painting in turn.

By changing parameters I can control how smooth or muddled the noise is, how bimodal the distribution is, how fast it moves through time/space, etc.

Currently I'm using a simple linear interpolation between the two source images, which is then passed through a sigmoid function. There are at least a dozen other ways I could blend two colors. I need to explore them more thoroughly, but from what I've tried I like this approach. It's conceptually parsimonious and visually pleasing enough.
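In code, the per-pixel rule is tiny. Here's a minimal Ruby sketch of the blend just described (the helper names are mine; the real program is Processing, and drives w with the noise function):

def sigmoid(x, steepness = 10.0)
  1.0 / (1.0 + Math.exp(-steepness * (x - 0.5)))
end

# c1, c2: [r, g, b] colors from the two paintings; w: the blending-matrix
# value at this pixel, in [0, 1].
def blend_pixel(c1, c2, w)
  t = sigmoid(w)  # pushes most pixels toward one source or the other
  c1.zip(c2).map { |a, b| (a + t * (b - a)).round }
end

blend_pixel([220, 150, 170], [240, 220, 140], 0.3)

The sigmoid's steepness is one of the knobs controlling how bimodal the result is: crank it up and nearly every pixel commits to one painting or the other; flatten it and everything averages toward mud.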

The examples above show colors interpolated in RGB space. The results are good, but can get a little wonky if the two source colors are too dissimilar. Interpolating between two colors is a bit of black magic. AFAICT there is no one gold standard way to go about it. I've tried using HSV space but wasn't too pleased with the results. After that I wrote some code which does the interpolation in CIE-Lab color space. I think the results are very slightly better than RGB, but it's difficult for me to tell. I'll render out a couple of examples using that technique and maybe you can judge for yourself.

If I wanted to get sophisticated about this I should also write a method to do image registration on the source images. I have another semi-completed project which could really use that, so once I get around to coding it for that project I'll transfer it over to this one as well. (Although that other project needs to do it on photographs, and this one on paintings, and the latter is a lot trickier.)

Diffusion Patterns

This is an animation process inspired by Leo Villareal's and Jim Campbell's work with LEDs.

Jim Campbell, "Grand Central Station 4"
Leo Villareal, "Diamond Sea"

Since I have neither the money nor space for hardware, I'm settling for this.

A grid of nodes is initiated with random wirings between adjacent nodes. Each node is given a fixed amount of charge, and the charge flows between wired nodes over time. At each time step a random number of wires are created or destroyed, which keeps the system from settling into a fixed state.
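A minimal Ruby sketch of that update rule (illustrative names, not the project's actual code, which is Processing):

# charge: hash from node coordinates to charge; wires: list of node pairs.
def diffuse(charge, wires, rate = 0.1)
  flow = Hash.new(0.0)
  wires.each do |a, b|
    d = rate * (charge[a] - charge[b])  # charge flows from high to low
    flow[a] -= d
    flow[b] += d
  end
  charge.merge(flow) { |_node, c, f| c + f }  # total charge is conserved
end

charge = { [0, 0] => 1.0, [0, 1] => 0.0, [1, 0] => 0.5 }
wires  = [[[0, 0], [0, 1]], [[0, 0], [1, 0]]]
charge = diffuse(charge, wires)
# ...then randomly add or remove a few wires before the next step.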

I use a variation of Blinn's metaball algorithm to render each node, which I know isn't a great match for the look of LEDs when it comes to photorealism, but I like it for this purpose and I've been looking for an excuse to play around with metaballs anyway.

Metaballs are typically coded to have constant mass/charge/whatever and varying location. I've flipped that so their charge is variable and location is constant. Visually I think it's actually a pretty good match for the things Jim Campbell does with LEDs behind semi-opaque plexiglass sheets. (Or could be if I tweaked it with that in mind as a goal.)
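Concretely, the rendering boils down to evaluating a field like this at every pixel and thresholding it (again a sketch, with made-up numbers):

# nodes: hash from fixed [x, y] locations to their current charge.
def field(x, y, nodes)
  nodes.sum do |(nx, ny), q|
    r2 = (x - nx)**2 + (y - ny)**2 + 1e-6  # avoid dividing by zero
    q / r2  # each node contributes charge over squared distance
  end
end

nodes = { [10.0, 10.0] => 1.0, [14.0, 10.0] => 0.4 }
lit = field(12.0, 10.0, nodes) > 0.2   # is this pixel inside an isocontour?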

I'd like to take this same rendering process and use it for a lot of other processes besides the charge-diffusion algorithm that's running in the background here.

Card Matching

As part of my research, I've been building a neural network system to play a memory game you might have played as a child. It's variously known as "Concentration," "Memory," "Pelmanism," "Pairs" and assorted other names. The basic idea is that several pairs of cards are face-down on a table, and you have to find all the pairs by turning over two cards at a time.

I want to compare my system's performance to that of humans, but don't have any information about human performance to use as a benchmark. To clear that hurdle, I built an online version of the game for people to play so I can record their behavior. You can play my card matching game by going here.

In addition to collecting data for my research, coding this game also gave me an excuse to learn jQuery. Stay tuned here for some other projects which make use of what I learned.

ORBS*COOPERATED
*O*I*A**A*O*A*A
INSTRUMENTALIST
*D*U*T**T*R*L*A
MENASIONS*KISER
*A*T*O*A*A*M**A
BUNION*TANTALUM
***O*A*I*T*G***
SCENARIO*INITIO
O**A*Y*N*C*N*M*
LOWLY*ESCALATOR
I*A*A*A**N*T*G*
DISENFRANCHISED
A*P*K*L**E*O*N*
YESTERYEAR*NAEF
A sample puzzle generated by WynneWord, using the British-style grid from the Wikipedia Crossword Puzzle article.

WynneWord

Crossword Generator

(Work in progress)

Way back in undergrad, one of my assignments was to write a program to automatically generate small crossword puzzles. (Just the grid of letters, not the associated clues.) I liked the solution I came up with then, but was consistently frustrated that I had only a few days to work on it. Ever since I've wanted to go back and do it right.

WynneWord is a small attempt to solve that problem better than I had time to back then. (Of course I still don't have as much time as I'd wish to improve this.) One of the improvements over the approach I used in school is to store all the potential words in a trie. When I wrote the code to do that I had no idea such a data structure had an actual name; it wasn't until years after I first sketched it on paper, and later still actually coded it up, that I realized I had stumbled upon something that already existed.
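For anyone unfamiliar with tries, here's a minimal Ruby sketch of the idea (WynneWord itself is more involved): insert the whole corpus, then while filling the grid ask whether a partial slot is still the prefix of some legal word, and backtrack as soon as it isn't.

class Trie
  def initialize
    @root = {}
  end

  def insert(word)
    node = @root
    word.each_char { |c| node = (node[c] ||= {}) }
    node[:end] = true  # mark a complete word
  end

  # Is str still a viable beginning of some inserted word?
  def prefix?(str)
    node = @root
    str.each_char do |c|
      node = node[c]
      return false unless node
    end
    true
  end
end

t = Trie.new
%w[ORBS ORE ORGAN].each { |w| t.insert(w) }
t.prefix?("OR")  # => true:  keep filling this slot
t.prefix?("OQ")  # => false: backtrack now, not after the slot is full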

I'm waiting to post code and sample results until I find time to make at least one of the following improvements:

  1. Build a much larger corpus of words and especially phrases and proper nouns to draw from.
  2. Improve the back-tracking system so that ... you know what? I can't really explain this without a ton of background. For now I'll just leave it at "improved back-tracking."
  3. Parallelize part of the search procedure. (Although I don't think this problem lends itself terribly well to parallelization, there are parts of it that could potentially benefit.) This would mostly be an excuse to re-learn parallel programming, which is something I haven't needed to do in 5+ years.
Until then, here are a couple of small samples of the puzzles I've generated. Starting with the input file (the first grid below), WynneWord generated each of the filled-in grids.

13 13
...##....#...
.....#...#...
......#...##.
###...#......
..#...###...#
..#.#....#..#
.....###.....
#..#....#.#..
#...###...#..
......#...###
.##...#......
...#...#.....
...#....##...
***************
*BAR**AREN*ART*
*ARENA*ORE*RAI*
*RANGER*AIR**M*
****URE*SLATES*
*AR*NOT***NET**
*LA*I*IDLE*AH**
*LINSE***REMIC*
**NE*LIMB*A*CA*
**IRA***ART*ST*
*SEDUCE*ROA****
*I**LAN*RUGGED*
*NOT*REE*TERRI*
*GRO*EARL**OOP*
***************
***************
*SAN**ORTH*SAN*
*ERASE*ARA*ARA*
*EIGHTH*INC**S*
****ATE*EDITOR*
*AR*RAN***ARA**
*RA*P*NINE*AX**
*INDEX***RECAP*
**KI*IMAM*A*CO*
**INC***ALT*AL*
*SNEEZE*RAM****
*A**LAN*TSAKOS*
*LEA*REA*KNOTT*
*ARI*RAND**OTA*
***************

(Yes, some of the words are weird. That's a fault in the ad hoc corpus WynneWord currently reads as input, not the program itself. I think a lot of the words are drawn from financial reports and the Enron emails, which explains why there are a lot of obscure company names and various financial acronyms in my results.)

Soon I hope to post more examples, some discussion of my technique, and the code used.


PS – WynneWord is named after Arthur Wynne, who published the first English-language crossword puzzle. As a bonus, it reminds me of my favorite line from The Waste Land: "O you who turn the wheel and look to windward / Consider Phlebas, who was once handsome and tall as you."

Celtic Knots

I wanted to play around with non-photorealistic graphics, so I created a program to generate Celtic knot patterns.

You can see some of the results here, including several animations.

I was inspired in part by one of Craig Kaplan's course assignments for CS 791 at the University of Waterloo.

I know the whole point, historically, of Celtic knots is symmetry and covering the available space with as few continuous curves as possible, but I'm more interested in what happens with random knots. I like the balance of organic chaos and regimented structure that results.

Noise Portrait

"Noise Portrait" is an algorithmic animation I created. It gradually reveals a photo of myself (though of course you can supply any image as source data). The locations of the "brushes" are determined by Perlin Noise, hence the name. The size of the brushes gradually decrease, rendering the underlying photo in more detail. Eventually the brush size increases again, blurring things, before decreasing back to a more detailed mode, then blurring again, etc.

At right is a pre-recorded video showing one run of the animation. (Because of the random nature of Perlin Noise, every run is different.) If you would like to see a live version of the animation, there's a javascript version on this page that you can test out.

Noise Portrait was created using Processing. You can view the source code here.

paletteSOM

This is a Self-Organizing Map used to make an abstract animation based on the colors used in sets of images. This demo clip has been fed four JMW Turner paintings (two nautical sunsets and two of the burning of Parliament). There's a couple others on Vimeo.
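The learning rule behind it is the standard SOM update; here's a minimal Ruby sketch (parameters invented for illustration): find the map cell whose color is closest to a pixel sampled from a painting, then nudge that cell and its neighbors toward the sample.

# map: 2-D array of [r, g, b] weights; color: a sampled [r, g, b] pixel.
def som_step(map, color, lr = 0.05, radius = 1)
  # Find the best-matching unit (squared RGB distance).
  bi = bj = 0
  best = Float::INFINITY
  map.each_with_index do |row, i|
    row.each_with_index do |w, j|
      d = w.zip(color).sum { |a, b| (a - b)**2 }
      bi, bj, best = i, j, d if d < best
    end
  end
  # Pull the winner and its neighborhood toward the sampled color.
  (bi - radius..bi + radius).each do |i|
    next if i < 0 || i >= map.size
    (bj - radius..bj + radius).each do |j|
      next if j < 0 || j >= map[i].size
      map[i][j] = map[i][j].zip(color).map { |w, c| w + lr * (c - w) }
    end
  end
end

som_map = Array.new(8) { Array.new(8) { [rand(256), rand(256), rand(256)] } }
som_step(som_map, [212, 133, 80])  # one Turner-ish orange pixel

Run that over thousands of sampled pixels and the grid organizes itself into smooth patches of the paintings' palette, which is what gets animated.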

Reading List Formatter

Coming soon (hopefully)

(This doesn't even rise to the level of a 'project,' as simple as it is. But I struggled for a while to get LaTeX to produce a reading list for my dissertation proposal in the format I wanted, so I finally rolled my own solution. It might be useful to others.)

Asterism

Again, this doesn't rise to 'project' level, just a snippet of LaTeX I put together so that I could use asterisms (⁂) when writing papers. I use them to mark off sections of text which will need further attention when editing.

As I said, this isn't really a project, but I'm putting it up here because hopefully it will lead to me cleaning up and posting more of the LaTeX macro file I've been piecing together over the last year. (And who knows, maybe there's some other STEM grad student who gets as excited over obscure typographical marks as I do.)


Updated: I've got a much, much simpler solution than the one I gave below, and it appears to get rid of the weird beginning-of-paragraph bug I ran in to. I haven't tested it extensively, but it seems to work better than the solution I posted previously, and it's certainly much easier to understand.

\newcommand{\asterism}{%
\makebox[1em][c]{%
\makebox[0pt][c]{\raisebox{-0.8ex}{\smash{**}}}%
\makebox[0pt][c]{\raisebox{0.2ex}{\smash{*}}}%
}}

For the record, here's the old version:

\newcommand{\asterism}{%
  \smash{%
    \begin{minipage}[t]{1.2em}%
      \centering%
      \begin{spacing}{1.0}%
        \raisebox{-.15em}{%
          \setlength{\tabcolsep}{.025em}%
          \renewcommand*{\arraystretch}{0.5}%
          \resizebox{1.05em}{!}{%
            \begin{tabular}{@{}cc@{}}%
              \multicolumn{2}{c}*\\[-0.5em]%
              *&*%
            \end{tabular}%
          }% end resizebox
        }% end raisebox
      \end{spacing}%
    \end{minipage}%
  }% end smash
}

There are other macros floating around out there that will create asterisms, but the ones I tried don't work if you're not using single-spacing/standard leading. This one will — best I can tell — in addition to working with different sized text, etc.


[ Oops. There is one issue with this I just discovered that I haven't ironed out yet. When you put \asterism at the front of a new paragraph LaTeX will begin an unindented line with the asterism, then start another new, indented paragraph for whatever comes after it. To stop this you can insert a non-breaking space ("~\asterism"), which won't take up any additional room but will make everything work correctly. Not ideal, but that's the work-around I'm using until I can dedicate some time to figuring this out. ]

Recommendations

You can access the entire list here, or go to a specific category below.

This isn't a list of my favorite things, but things I think other people should try out. I've tried to a certain extent to keep away from listing obvious things that lots of people already know about or like, since you don't need my suggestion for those. And if I were to just list my favorite things, the page would go on forever. So I've tried to keep the list down to things I've actually recommended to friends in conversation. Nonetheless, I have a feeling it's going to grow pretty long.

Links