May '13: UMD's Center for Complexity in Business is looking to hire a postdoc and a couple of undergrad researchers. We hire from a wide range of backgrounds: right now we've got CS, Physics, Applied Math, Econ, and Marketing. Please forward these openings to anyone you know who might be interested; the sooner these positions get filled the sooner I can stop juggling seventeen dozen different projects ;)
April '13: My Neural Networks paper on autonomous learning has finally been released. Out of respect for the editors I won't mention how long it's taken from planting to harvest ;) You can grab a copy below or on Science Direct.
March '13: I've started blogging. (Well, blogging under my own name. I've been doing the incognito thing for a long while.) I'm thinking the theme will mostly be cross-over between CS and business, but I'll also post some things like book reviews and other overflow content that doesn't really fit here. Check it out.
February '13: I'm giving a presentation to the Marketing Dept's biweekly "Quant Lunch" on Monday 25 February to solicit feedback on the modeling & prediction project I've been working on for Smart Bomb Interactive. If you're interested in social networks, business, and machine learning, stop by Smith 2509 at 12:00.
September '12: Want to help with my research? In order to compare one of my neural network systems against human performance on a memory game, I need actual human data. That's where you come in. Go to this website I created and have some nostalgic fun playing a memory game you probably haven't touched since childhood, all while helping my research. Come on, do it for Science.
August '12: I've been teaching myself jQuery over the last couple of weeks, and have added some small, dynamic gimcrackery to this site. I've only tested them on Chrome, Firefox and Safari on my Mac, and only thoroughly on the foremost of those, so if there's some weird stuff going on for you... sorry. I spent enough hours back in '98–'02 doing cross-browser compatibility testing to last me a lifetime, and I'm in no mood to go find a Windows box to try out my little gewgaws in IE.
(PS: Where was jQuery a decade ago? I can't begin to describe how much hair-pulling and teeth-gnashing it would have saved me back then.)
(PPS: Speaking of cross-browser testing, I have no idea how the typography looks on Windows machines. I've spent a bit of time getting it looking the way I want in Chrome/Mac, but I make no promises about how other machines will render things, especially w.r.t. letter weights.)
July '12: It looks like I'm going to be charting a new course. Due to some budgetary circumstances beyond our control, CASL has shut down its computational modeling efforts, which has ended my funding. Less than a week after I got that bad news, though, I managed to find an RA position with Bill Rand, Director of the Center for Complexity in Business at UMD's Smith School of Business. I'll be working on social media modeling with Bill, along with professors Louiqa Raschid (Information Systems) and Yogesh Joshi (Marketing). This is definitely an unscheduled detour for me, but it's an exciting one.
May '12: We have a puppy! Everyone, this is Bonnie, our new three-month-old Westie.
I'm a doctoral candidate at the University of Maryland, College Park, in the Department of Computer Science. My main research interest is in biologically inspired computing, primarily neural networks at this point, though I've done some evolutionary computation in the past as well. More specifically, my dissertation work is on executive function and cognitive control (working memory, decision making, etc.) using neural-inspired systems rather than rule-based ones.
I was graduated in 2006 from the Computer Science and Engineering department at Notre Dame, where I lived in Zahm House.
Outside of academics, I'm interested in economics (especially the George Mason variety) and political philosophy. Film and art — especially sculpture, animation and algorithmic art — are also big interests. Recently I've taken up an interest in baking bread. Our new Westie puppy also takes up a bit of my time.
Résumé / C.V.
Last updated March 2013.
As of fall semester 2013.
- Interactive CV
- Fall 2006
- CMSC 727: Neural Computation (Reggia)
- CMSC 725: Geographic & Spatial Information Systems (Samet)
- CMSC 828P: Cognitive Science & AI (Perlis)
- Spring 2007
- Fall 2007
- CMSC 726: Machine Learning (Reggia)
- ENEE 633 (aka CMSC 828C): Statistical Pattern Recognition (Chellappa)
- Spring 2008
- CMSC 838V: Creativity Support Tools (Sazawal)
- Spring 2009
- BMGT 808L: Complex Systems in Business (Rand)
- Spring 2010
- CMSC 828X: Nature-Inspired Artificial Intelligence (Reggia)
- Fall 2010
- CMSC 858F: Algorithmic Game Theory (HajiAghayi) — audited
I am currently working with Jim Reggia on exploring neural models of cognitive control. Most cognitive control models are built using symbolic, rule-based paradigms. Such systems are biologically implausible and often tend toward the homuncular.
The neural models that do exist are typically designed narrowly for a particular task, require a great deal of human intervention to tailor them to the objective, and have trouble scaling to larger problem spaces.
I am exploring a more generalizable model of cognitive control built on a neural paradigm: networks which learn not only memories of environmental stimuli but also the steps necessary to complete the task. The steps are stored in a memory formed by a sequential attractor network I developed, so that they can be visited in order. I call my model GALIS, for "Gated Attractor Learning Instruction Sequences."
By generating behavior from the learned contents of a memory rather than from the explicit structure of the network itself, I believe it will be much easier for the model's behavior to change. Rather than having to rebuild the "hardware" of the network, you can instead load different "software" by training the memory on different patterns. Furthermore, making the model's behavior readily mutable opens the door to the model improving its performance as it gains experience. That, in turn, should allow the model to learn on its own the behavior necessary to complete a task.
Basing behavior on memory contents rather than architecture is not unlike the shift from clockwork automata like Vaucanson's "Digesting Duck" to the Jacquard Loom. The latter was an important step in the history of computation because its behavior could be changed simply by swapping in a different set of punchcards — i.e., by changing the contents of its memory. Of course GALIS surpasses the Jacquard loom because the loom was only able to follow instructions, not conduct any computation of its own. GALIS, on the other hand, determines endogenously when and how to modify its working memory, produce outputs, etc.
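If you'd like a feel for the sequential-attractor idea without wading through the papers, here's a toy Python sketch (emphatically not the actual GALIS code, and stripped of all the gating machinery): a temporally asymmetric Hebbian rule stores a sequence of patterns so that settling into one state cues up the next.

import numpy as np

def store_sequence(patterns):
    """Asymmetric Hebbian learning: each pattern is wired to cue its successor."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for t in range(len(patterns) - 1):
        W += np.outer(patterns[t + 1], patterns[t])   # next state x current state
    return W / n

def recall_sequence(W, cue, steps):
    """Replay the stored sequence by repeatedly settling from the cue."""
    x = cue.copy()
    states = [x]
    for _ in range(steps):
        x = np.sign(W @ x)      # binary (+/-1) threshold units
        states.append(x)
    return states

# Demo: three random +/-1 patterns, replayed in order from the first one.
rng = np.random.default_rng(0)
seq = rng.choice([-1.0, 1.0], size=(3, 64))
states = recall_sequence(store_sequence(seq), seq[0], steps=2)
print([np.array_equal(s, p) for s, p in zip(states[1:], seq[1:])])   # expect [True, True]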
In addition to my dissertation research I'm also working with Bill Rand in the Center for Complexity in Business at UMD's Smith School of Business. I'm working on a couple of projects, but the main one for me is an effort to model social interactions in an MMORPG with a freemium business model. Our goal is to model who will convert from free user to paid user based on their location in the in-game social graph and on their own characteristics and those of their friends. We're using a variety of techniques, including agent-based modeling, logistic regression and assorted machine learning techniques.
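For the curious, here's roughly what the simplest version of the conversion-prediction task looks like in code. The features (friend count, paying friends, hours played) are made-up stand-ins rather than our actual feature set, and the data below is synthetic:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = np.column_stack([
    rng.poisson(8, 1000),        # friend count in the social graph
    rng.poisson(1, 1000),        # friends who already pay
    rng.gamma(2.0, 5.0, 1000),   # hours played
])
# Synthetic labels: conversion odds rise with paying friends and playtime.
logit = -3.0 + 0.8 * X[:, 1] + 0.05 * X[:, 2]
y = rng.random(1000) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(X, y)
print(model.coef_, model.intercept_)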
Prior to GALIS I worked with Jim on two other projects. The first is a computational model of working memory formation. This is being done in conjunction with a wide-ranging study at UMD's Center for Advanced Study of Language into the role of working memory in language tasks.
This study of working memory led into the cognitive control research I am doing now. I have also used machine learning methods to analyze the results of some CASL studies to see if it is possible to determine who will benefit from working memory training based on pre-test results. Please see the 2011 tech report below for more.
The second project, begun in Spring 2007, deals with symmetries in topographic Self-Organizing Maps. By limiting the radius of competition and choosing multiple winners for standard Hebbian learning we can generate cortices with global patterns of symmetric maps. Please see the 2009 Neural Computation paper below for details.
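Here's a toy 1-D illustration of the restricted-competition idea (my sketch for this page, not the paper's code, which works on two-dimensional cortical sheets):

import numpy as np

def local_winners(act, radius):
    """Every unit that is the most active within its own competition radius."""
    winners = []
    for i in range(len(act)):
        lo, hi = max(0, i - radius), min(len(act), i + radius + 1)
        if i == lo + np.argmax(act[lo:hi]):
            winners.append(i)
    return winners

def train(inputs, n_units=50, radius=5, lr=0.1, sigma=2.0, epochs=20, seed=0):
    """A 1-D map where competition is local, so one stimulus can have several
    winners, each updated with the usual Hebbian/SOM neighborhood rule."""
    rng = np.random.default_rng(seed)
    W = rng.random((n_units, inputs.shape[1]))
    pos = np.arange(n_units)
    for _ in range(epochs):
        for x in inputs:
            act = -np.linalg.norm(W - x, axis=1)      # activation = similarity
            for win in local_winners(act, radius):
                h = np.exp(-((pos - win) ** 2) / (2 * sigma ** 2))
                W += lr * h[:, None] * (x - W)
    return W

W = train(np.random.default_rng(1).random((200, 2)))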
As an undergrad I did machine learning research, creating and testing a system called EVEN, for "Evolutionary Ensembles": a genetic algorithm framework for combining multiple classifiers for machine learning and data mining. It is very flexible, with the ability to combine any type of base classifier using different fitness metrics. This work was done with Nitesh Chawla, who advised me for my final two years at Notre Dame.
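To give a flavor of the approach (a from-scratch toy, not EVEN itself), here's a small genetic algorithm that evolves a bitmask over base classifiers, with majority-vote accuracy standing in for the fitness metric:

import numpy as np

def evolve_ensemble(preds, y, pop=30, gens=40, p_mut=0.05, seed=0):
    """Evolve a bitmask over base classifiers. preds is (n_classifiers,
    n_samples) of 0/1 predictions; fitness is majority-vote accuracy."""
    rng = np.random.default_rng(seed)
    n_clf = preds.shape[0]
    genomes = rng.random((pop, n_clf)) < 0.5

    def fitness(mask):
        if not mask.any():
            return 0.0
        vote = preds[mask].mean(axis=0) > 0.5        # majority vote
        return float((vote == y).mean())

    for _ in range(gens):
        scores = np.array([fitness(g) for g in genomes])
        parents = genomes[np.argsort(scores)[::-1][: pop // 2]]   # truncation selection
        kids = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_clf)
            child = np.concatenate([a[:cut], b[cut:]])            # one-point crossover
            child ^= rng.random(n_clf) < p_mut                    # bit-flip mutation
            kids.append(child)
        genomes = np.vstack([parents, kids])
    return genomes[np.argmax([fitness(g) for g in genomes])]

# Demo with fake base-classifier outputs: 10 classifiers, 500 samples.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 500)
preds = np.array([np.where(rng.random(500) < 0.65, y, 1 - y) for _ in range(10)])
best = evolve_ensemble(preds, y)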
Sylvester, J., Reggia, J., Weems, S., & Bunting, M. "Controlling Working Memory with Learned Instructions." Neural Networks, vol. 41, Special Issue on Autonomous Learning, pp. 23–38. 2013. [ link, pdf, BibTeX ]
Sylvester, J., & Reggia, J. "Plasticity-Induced Symmetry Relationships Between Adjacent Self-Organizing Topographic Maps." Neural Computation, vol. 21(12), pp. 3429–3443. 2009. [ link, pdf, BibTeX ]
Darmon, D., Sylvester, J., Girvan, M., & Rand, W. "Predictability of User Behavior in Social Media: Bottom-Up v. Top-Down Modeling." Submitted, May 2013.
Sylvester, J., & Reggia, J. "The Neural Executive: Can Gated Attractor Networks Account for Cognitive Control?" Submitted, April 2013.
Reggia, J., Monner, D., & Sylvester, J. "The Computational Explanatory Gap." Submitted, April 2013.
Sylvester, J., Reggia, J., & Weems, S. "Cognitive Control as a Gated Cortical Net." Proc. of the Int'l Conf. on Biologically Inspired Cognitive Architectures, pp. 371–376. Alexandria, VA, August 2011. [ pdf, BibTeX, slides ]
Sylvester, J., Reggia, J., Weems, S., & Bunting, M. "A Temporally Asymmetric Hebbian Network for Sequential Working Memory." Proc. of the Int'l Conf. on Cognitive Modeling, pp. 241–246. Philadelphia, PA, August 2010. [ pdf, BibTeX ]
Reggia, J., Sylvester, J., Weems, S., & Bunting, M. "A Simple Oscillatory Short-term Memory Model." Proc. of the Biologically-Inspired Cognitive Architecture Symposium, AAAI Fall Symposium Series, pp. 103–108. Arlington, VA, 2009. [ pdf, BibTeX ]
Sylvester, J., Weems, S., Reggia, J., Bunting, M., & Harbison, I. "Modeling Interactions Between Interference and Decay During the Serial Recall of Temporal Sequences." Proc. of the Psychonomic Society Annual Meeting, November 2009. [ pdf, BibTeX ]
Chawla, N., & Sylvester, J. "Exploiting Diversity in Ensembles: Improving the Performance on Unbalanced Datasets." Proc. of Multiple Classifier Systems, pp. 397–406. 2007. [ pdf, BibTeX ]
, & Chawla, N. "Evolutionary Ensemble Creation and Thinning." Proc. of IEEE IJCNN/WCCI, pp. 5148–55. 2006. [ pdf, BibTeX ]
, & Chawla, N. "Evolutionary Ensembles: Combining Learning Agents using Genetic Algorithms." Proc. of AAAI Workshop on Multi-agent Systems, pp. 46–51. 2005. [ pdf, BibTeX ]
, Reggia, J., & Weems, S. "Predicting improvement on working memory tasks with machine learning techniques." UMD Center for Adv. Study of Languages. Technical Report. 2011. [ pdf ]
"Maximizing Diffusion on Dynamic Social Networks." 2009. Submitted to satisfy the requirements for my Master's in CS. Originally written as a final project report for BMGT 808L (Complex Systems in Business). [ pdf ]
"Neurocognitive Architecture Case Study: GALIS." An informal guest lecture in CMSC 727
(Neural Computation). May 2013. [ pdf ]
"Attractor Network Models for Cognitive Control." Given for CASL's Lunch Lecture series. College Park, MD. March 13, 2012.
"Modeling Cognitive Control of Working Memory as a Gated Cortical Network." Invited talk at the First Int'l Workshop on Cognitive and Working Memory Training. Introduction by Jim Reggia. Hyattsville, MD. August 23–25, 2011. [ pdf ]
— A chapter based on this material is in preparation.
"Oscillatory Neural Network Models of Sequential Short-Term Memory." Given for CASL's Lunch Lecture series. Introduction by Scott Weems. College Park, MD. June 15, 2010. [ pdf ]
The Jared Watch
This is a sample of some of the media I've been enjoying when I'm not in the lab. Click the triangles to show some brief comments for each entry, or show all of them at once. You can find older entries on my Archives page.
- What has Jared been reading recently?
- Double Entry: How the Merchants of Venice Created Modern Finance, Jane Gleeson-White
The historical aspect was much more interesting than the discussion of accounting's place in contemporary society. In the last several chapters in particular — especially those dealing with GDP, HDI and the environment — Gleeson-White conflates problems of property rights with problems of counting GDP. (E.g. if some Sri Lankan subsistence fishermen are kicked off the coast so their government can build hotels, is that primarily (a) the fault of GDP accounting, which counts tourism dollars but not non-market fishing production, or (b) a problem with eminent domain abuse?)
In a book about accounting, I would expect the dollar figures she reports for things like tree-lined sidewalks or cultural artifacts to have undergone some basic economic scrutiny, but they do not. She commits, for example, the same error of ignoring the unseen that advocates for publicly-financed stadiums do.
Despite my griping, the first two-thirds of the book, dealing with the history of numeracy, mathematical education, and business practices in Renaissance Italy, are superb.
- Annabel Scheme, Robin Sloan
The title character's job is given on the first page as "Investigator: Digital & Occult." The case she's investigating here involves possessed quantum computers, an inter-dimensional auction website, an MMORPG where some of the players are ghosts, and a brawl in a coffee bar between hackers and a demon. I would read a lot more of these stories if Sloan writes them. It's also nice to read a story with tech in it written by someone who actually knows how to code (e.g.).
- Design for Hackers: Reverse Engineering Beauty, David Kadavy
A very good introduction. If you've made some effort to learn design then a lot of this won't be new to you, but there is still stuff to learn here. It finds a good middle ground between being overly theoretical and being too much of a how-to guide. Appropriately selecting font sizes using proportions was well explained. Kadavy also did a better job explaining color theory than I had seen before. The examples using Monet were particularly helpful.
- 14, Peter Clines
I don't even want to say "it's like FOO meets BAR with some BAZ and a little QUX thrown in" because filling in those variables would give a lot away. It's not the best-written thing, but on Mamet's understand/endure/escape model of fiction, this does score pretty well on the escape axis.
- Super Crunchers: Why Thinking-By-Numbers is the New Way To Be Smart, Ian Ayres
Not that informative for me, but I'm not the intended audience. It was useful to read in order to know what other people know about "Big Data" though. (Side note: I'm glad the field of Data Mining managed to re-brand itself. I think "Big Data" sounds silly and will sound dated soon, but "Data Mining" was unfortunately and unfairly saddled with too much negative connotation among general audiences.) I appreciated that he treated neural networks in addition to regression techniques. I very rarely see anyone, even very sophisticated authors, move beyond regression when discussing this for general audiences.
PS I find it highly annoying when an author coins a new term for the title of their book and then uses that neologism throughout the book. It always reminds me of the teenager who's trying too hard to get a new slang term of their own devising to catch on. Don't be that guy.
- The Dinner, Herman Koch
Based on its features it's a safe bet that I would not like this, but that prediction would have been wrong. In a way that makes it even more enjoyable. I especially appreciated how the narrator's mental state was only gradually revealed. All the flashbacks and digressions would usually have annoyed me, but Koch handled them exceptionally well.
- Management in Ten Words, Terry Leahy
Better than most CEO-authored books, but ultimately full of standard advice. It's important to value your customers and employees; seize opportunities but don't rush in without planning; etc. We get it.
- Ragnarok: The End of the Gods, A.S. Byatt
This is part of the Canongate Myth Series, which has contemporary authors re-telling ancient myths. I soaked up all the Greco-Roman mythology I could get as a kid but have never immersed myself in other mythologies despite an interest. Having them actually presented as fiction, as Byatt does, worked better than reading them as non-fiction. Previous sources I've tried are either superficial or fractally labyrinthine. I think the framing story Byatt chose was a little superfluous, though it gets points for lyricism. As a whole it was certainly good enough for me to pick up other books in the series.
- Uncontrolled: The Surprising Payoff of Trial-and-Error for Business, Politics, and Society, Jim Manzi
This was great. I have far too much to say about it to confine myself to a paragraph here, so I'll need to put up a blog post about this pronto.
- Stockholm Octavo, Karen Engelmann
I ended up reading this based on several "Customers who bought this also bought..." messages, and it was not what I expected based on the items that led to the recommendation. A little ponderous — the events don't live up to the potential Engelmann created with the setting and characters and intrigues — but still good.
- Thank You, Jeeves, P.G. Wodehouse
I can't think about these stories without seeing and hearing Hugh Laurie and Stephen Fry in my mind.
- What has Jared been watching recently?
- House of Cards (2013), Beau Willimon, David Fincher, et al.
I'm very glad Netflix released all 13 episodes at once rather than doing things the traditional way. I haven't had a good show to binge watch in ages. (Which has been great for my reading habit, but still.)
- Oz the Great and Powerful, Sam Raimi
Eh. The visuals were quite well done but never knocked my socks off, which is what I needed to compensate for this lackluster story.
- Vikings, Michael Hirst
Much better than I expected for the History Channel's rookie effort. I'm particularly impressed that the Norsemen are presented as convincingly alien. They're neither bloodthirsty barbarians nor gentle folk misunderstood by history, which are the two approaches I would have expected. Sacking Lindisfarne Abbey was neither a big misunderstanding nor the act of madmen who just want to watch the world burn. It's just the kind of thing you do if you're in that culture in that time in that place.
- The Dark Knight Rises, Christopher Nolan
Too much to say about this to fit in here. I'll write up my thoughts on my blog. Short version: Better than all other superhero movies (and most contemporary action movies of any kind), with the big exception of The Dark Knight. People — myself included — set a very high bar for this film. I liked it quite a bit, but it ended up disappointing me more than it rationally should have, especially since it's exactly what I expected to get.
- House of Lies, Matthew Carnahan
Season 1 was just okay. I mostly enjoyed it because it gave Don Cheadle a chance to be Don Cheadle. Season 2 has been even more underwhelming. There's nothing specific to consulting about this, which I think is a wasted opportunity. It could be any white collar workplace drama: law firm, ad agency, whatever. I'm unlikely to bother with Season 3.
- Veep, Armando Iannucci
Season 2 just started. If you haven't seen the first one yet then do so now. Four hours very well spent. Iannucci understands politics better than almost anyone else who's attempted to dramatize it. As Jesse Thorn pointed out in a recent interview, the characters are screwing up constantly in one farcical disaster after another, but they aren't written as bumbling clown-idiots. That would have been too easy, and not as funny as the competent screw-ups they are.
Here are some projects I've fooled around with in my spare time, both scientific and artistic.
This is an abstract algorithmic animation project I've been working on. In a nutshell, it's my interpretation of what would happen if you projected a movie onto a flock of birds instead of a screen.
You can read about my methods and the videos that inspired it as well as see some final renders here. This video should give you a flavor.
Generally speaking, I don't see in color well. Not that I'm color blind. But when I look at a scene I notice shape and form much more than I notice color. When I was a child my art teacher had to coax me into bothering to color anything in; I was satisfied with line drawings. I spent a lot of time designing and making paper models, but always out of plain, unadorned white cardboard.
There are exceptions to this preference for space over color. Monet, Turner and Rothko all make me sit up and notice color. I especially like the various series Monet did of the same scene from the same vantage point under varying conditions. I love seeing multiple works from a series side-by-side in galleries so I can compare them.
"Waterloo Bridge at Sunset, Pink Effect"
"Waterloo Bridge, Hazy Sunshine"
This project was born out of the desire to be able to look at multiple pieces in the same series of Monet paintings at the same time. Swiveling my head back and forth rapidly only works so well, and it earns me extra weird looks from other patrons. (Plus, I feel like I need to correct my weakness w.r.t. color, and if I'm going to learn about color I might as well learn from the best.)
What I've done is write a program which loads two Monet images and blends them together. Just averaging the two would create a muddled, uninspired mess. So I use a noise function to decide at each pixel how much to draw from image 1 and how much from image 2.
Combination of "Sunset, Pink Effect" and "Hazy Sunshine" using Blending matrix.
You can see an example of this blending matrix on the left. Darker pixels in the blending matrix will have a color more similar to "Sunset, Pink Effect," while lighter pixels are closer to "Hazy Sunshine." A pixel which is exactly 50% gray (halfway between white and black) will be given a color halfway between the colors of the corresponding pixels in the two images.
The blending matrix is a function of time, so the influence of each source image over the output changes from moment to moment, allowing me to see different parts of each source image at different times. By changing parameters I can control how smooth or muddled the noise is, how bimodal the distribution is, and how fast it moves through time.
Currently I'm using a simple linear interpolation between the two source images, which is then passed through a sigmoid function. There are at least a dozen other ways I could blend two colors. I need to explore them more thoroughly, but from what I've tried I like this approach. It's conceptually parsimonious and visually pleasing enough.
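Here's a stripped-down sketch of one frame of the process in Python. Caveats: my real implementation doesn't match this line-for-line, the file names are placeholders, and blurred white noise stands in for my actual noise function:

import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

def blend_frame(img1, img2, t, frames=60, smooth=20.0, steepness=10.0, seed=0):
    """Blend two images per-pixel using a smooth noise field as the blending
    matrix; a sigmoid pushes the field toward 0 or 1 to control bimodality."""
    rng = np.random.default_rng(seed)
    h, w, _ = img1.shape
    # Smooth 3-D noise; slicing along the last axis animates the field in time.
    # (In a real render loop you'd compute this volume once, not per frame.)
    field = gaussian_filter(rng.random((h, w, frames)), sigma=(smooth, smooth, 4))
    a = field[:, :, t]
    a = (a - a.mean()) / (a.std() + 1e-9)     # normalize before the sigmoid
    a = 1 / (1 + np.exp(-steepness * a))      # 0 = all image 1, 1 = all image 2
    return img1 * (1 - a[:, :, None]) + img2 * a[:, :, None]

# Assumes the two scans have identical dimensions.
img1 = np.asarray(Image.open("sunset_pink_effect.jpg"), dtype=float)
img2 = np.asarray(Image.open("hazy_sunshine.jpg"), dtype=float)
frame = blend_frame(img1, img2, t=30)
Image.fromarray(frame.astype(np.uint8)).save("blend_030.png")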
The examples above show colors interpolated in RGB space. The results are good, but can get a little wonky if the two source colors are too dissimilar. Interpolating between two colors is a bit of black magic; AFAICT there is no one gold-standard way to go about it. I've tried using HSV space but wasn't too pleased with the results. After that I wrote some code which does the interpolation in CIE-Lab color space. I think the results are very slightly better than RGB, but it's difficult for me to tell. I'll render out a couple of examples using that technique and maybe you can judge for yourself.
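The Lab version is nearly a one-line change if you lean on scikit-image for the color-space conversions (file names are placeholders again):

import numpy as np
from skimage import color, io

# Interpolating in CIE-Lab instead of RGB: convert, mix, convert back.
img1 = io.imread("sunset_pink_effect.jpg") / 255.0
img2 = io.imread("hazy_sunshine.jpg") / 255.0
lab1, lab2 = color.rgb2lab(img1), color.rgb2lab(img2)
a = 0.5   # in the real program this is the per-pixel blending matrix
mixed = color.lab2rgb((1 - a) * lab1 + a * lab2)
io.imsave("blend_lab.png", (mixed * 255).astype(np.uint8))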
If I wanted to get sophisticated about this I should also write in a method to do image registration on the source images. I have another semi-completed project which could really use that, so once I get around to coding it for that project I'll transfer it over to this one as well. (Although that other project needs to register photographs, and this one paintings, and the latter is a lot trickier.)
This is an animation process inspired by Leo Villareal's and Jim Campbell's work with LEDs.
Jim Campbell, "Grand Central Station 4"
Leo Villareal, "Diamond Sea"
Since I have neither the money nor space for hardware, I'm settling for this.
A grid of nodes is initialized with random wirings between adjacent nodes. Each node is given a fixed amount of charge, and the charge flows between wired nodes over time. At each time step a random number of wires are created or destroyed, which keeps the system from settling into a fixed state.
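In code, the background process is just leaky diffusion on a grid plus some churn in the wiring. A minimal sketch (written for this page; the parameter values are arbitrary):

import numpy as np

def step(charge, wires, rate=0.1, churn=0.02, rng=None):
    """One time step: charge flows along wires from higher- to lower-charge
    nodes, then a few wires between adjacent nodes are made or broken."""
    rng = rng or np.random.default_rng()
    new = charge.copy()
    # wires[i, j, 0] connects (i, j)-(i, j+1); wires[i, j, 1] connects (i, j)-(i+1, j).
    flow_h = rate * (charge[:, :-1] - charge[:, 1:]) * wires[:, :-1, 0]
    new[:, :-1] -= flow_h
    new[:, 1:] += flow_h
    flow_v = rate * (charge[:-1, :] - charge[1:, :]) * wires[:-1, :, 1]
    new[:-1, :] -= flow_v
    new[1:, :] += flow_v
    wires ^= rng.random(wires.shape) < churn   # churn keeps it from settling
    return new

rng = np.random.default_rng(0)
charge = rng.random((32, 32))
wires = rng.random((32, 32, 2)) < 0.5
for _ in range(100):
    charge = step(charge, wires, rng=rng)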
I use a variation of Blinn's metaball algorithm to render each node. I know that isn't a great match for the look of LEDs when it comes to photorealism, but I like it for this purpose, and I've been looking for an excuse to play around with metaballs anyway.
Metaballs are typically coded to have constant mass/charge/whatever and varying location. I've flipped that so their charge is variable and their location is constant. Visually I think it's actually a pretty good match for the things Jim Campbell does with LEDs behind semi-opaque plexiglass sheets. (Or it could be, if I tweaked it with that in mind as a goal.)
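And the rendering half, sketched the same way: a Blinn-style inverse-square field where the node positions are frozen and only the charges vary:

import numpy as np
from PIL import Image

def render(centers, charges, size=400, eps=1e-4):
    """Blinn-style field: each node contributes charge / (distance^2 + eps).
    Positions stay fixed; only the charges change from frame to frame."""
    ys, xs = np.mgrid[0:size, 0:size] / size
    field = np.zeros((size, size))
    for (cx, cy), q in zip(centers, charges):
        field += q / ((xs - cx) ** 2 + (ys - cy) ** 2 + eps)
    # Map field strength to brightness rather than thresholding an isosurface.
    img = np.clip(field / field.max(), 0, 1) ** 0.5
    return Image.fromarray((img * 255).astype(np.uint8))

# A fixed 8x8 grid of nodes; random charges stand in for the diffusion values.
grid = [(x / 8 + 1 / 16, y / 8 + 1 / 16) for y in range(8) for x in range(8)]
render(grid, np.random.default_rng(0).random(64)).save("metaballs.png")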
I'd like to take this same rendering process and use it for a lot of other processes besides the charge-diffusion algorithm that's running in the background here.
As part of my research, I've been building a neural network system to play a memory game you might have played as a child. It's variously known as "Concentration," "Memory," "Pelmanism," "Pairs" and assorted other names. The basic idea is that several pairs of cards are face-down on a table, and you have to find all the pairs by turning over two cards at a time.
I want to compare my system's performance to that of humans, but don't have any information about human performance to use as a benchmark. To clear that hurdle, I built an online version of the game for people to play so I can record their behavior. You can play my card matching game by going here.
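As an aside, the game is simple enough that simulated baselines are easy to produce. Here's a toy simulator (mine, purely for illustration; it's not how I score the human data) of a player who remembers where it last saw each card value:

import random

def play(n_pairs=8, seed=None):
    """Turns a baseline player needs to clear the board; the player remembers
    where it last saw each card value (not optimal play)."""
    rng = random.Random(seed)
    cards = list(range(n_pairs)) * 2
    rng.shuffle(cards)
    seen = {}                                  # card value -> a known position
    remaining = set(range(len(cards)))
    turns = 0
    while remaining:
        turns += 1
        first = rng.choice(sorted(remaining))
        if cards[first] in seen and seen[cards[first]] != first:
            second = seen[cards[first]]        # the mate's location is remembered
        else:
            seen[cards[first]] = first
            second = rng.choice(sorted(remaining - {first}))
        if cards[first] == cards[second]:
            remaining -= {first, second}
        else:
            seen[cards[second]] = second
    return turns

print(sum(play(seed=i) for i in range(1000)) / 1000)   # average turns, 8 pairs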
In addition to collecting data for my research, coding this game also gave me an excuse to learn jQuery. Stay tuned here for some other projects which make use of what I learned.
WynneWord Crossword Generator
(Work in progress)
Way back in undergrad, one of my assignments was to write a program to automatically generate small crossword puzzles. (Just the grid of letters, not the associated clues.) I liked the solution I came up with then, but was consistently frustrated that I had only a few days to work on it. Ever since, I've wanted to go back and do it right.
WynneWord is a small attempt to solve that problem better than I had time to back then. (Of course I still don't have as much time as I'd wish to improve this.) One of the improvements over the approach I used in school is to store all the potential words in a trie. When I first worked out that approach I had no idea such data structures had an actual name; it wasn't until years after I'd sketched it on paper, and later still coded it up, that I realized I had stumbled onto something that already existed.
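For anyone who hasn't met a trie before, here's the bare-bones version of the structure and why it helps with crossword filling; WynneWord's real implementation has more machinery, but the principle is the same:

class TrieNode:
    def __init__(self):
        self.children = {}      # letter -> TrieNode
        self.is_word = False

class Trie:
    """Stores the corpus so partially-filled slots can be pruned early: if a
    prefix isn't in the trie, no word can ever complete that slot."""
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def has_prefix(self, prefix):
        node = self.root
        for ch in prefix:
            node = node.children.get(ch)
            if node is None:
                return False
        return True

t = Trie()
for w in ("ant", "anthem", "antler"):
    t.insert(w)
print(t.has_prefix("antl"))   # True: "antler" can still complete this slot
print(t.has_prefix("antq"))   # False: backtrack immediately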
I'm waiting to post code and sample results until I find time to make at least one of the following improvements:
- Build a much larger corpus of words and especially phrases and proper nouns to draw from.
- Improve the back-tracking system so that ... you know what? I can't really explain this without a ton of background. For now I'll just leave it at "improved back-tracking."
- Parallelize part of the search procedure. (Although I don't think this problem lends itself terribly well to parallelization, there are parts of it that could potentially benefit.) This would mostly be an excuse to re-learn parallel programming, which is something I haven't needed to do in 5+ years.
Until then, here are a couple of small samples of the puzzles I've generated. Starting with the input file on the left, WynneWord generated each of the filled-in grids.
(Yes, some of the words are weird. That's a fault in the ad hoc corpus WynneWord currently reads as input, not in the program itself. I think a lot of the words are drawn from financial reports and the Enron emails, which explains why there are a lot of obscure company names and various financial acronyms in my results.)
Soon I hope to post more examples, some discussion of my technique, and the code used.
PS – WynneWord is named after Arthur Wynne, who published the first English-language crossword puzzle. As a bonus, it reminds me of my favorite line from The Waste Land: "O you who turn the wheel and look to windward / Consider Phlebas, who was once handsome and tall as you."
I wanted to play around with non-photorealistic graphics, so I created a program to generate Celtic knot patterns.
You can see some of the results here, including several animations.
I was inspired in part by one of Craig Kaplan's course assignments for CS 791 at the University of Waterloo.
I know the whole point, historically, of Celtic knots is symmetry and covering the available space with as few continuous curves as possible, but I'm more interested in what happens with random knots. I like the balance of organic chaos and regimented structure that results.
"Noise Portrait" is an algorithmic animation I created. It gradually reveals a photo of myself (though of course you can supply any image as source data). The locations of the "brushes" are determined by Perlin Noise, hence the name. The size of the brushes gradually decrease, rendering the underlying photo in more detail. Eventually the brush size increases again, bluring things, before decreasing back to a more detailed mode, then blurring again, etc.
At right is a pre-recorded video showing one run of the animation. (Because of the random nature of Perlin Noise, every run is different.) If you would like to see a live version of the animation, there is an embedded applet on this page that you can test out.
Noise Portrait was created using Processing. You can view the source code here.
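If you'd rather skim an outline than dig through the Processing source, here's the gist as a Python sketch. I'm assuming the third-party noise package for the Perlin noise (Processing has noise() built in), an existing frames/ output directory, and a placeholder portrait.jpg:

import numpy as np
from PIL import Image, ImageDraw
from noise import pnoise3   # third-party "noise" package

src = np.asarray(Image.open("portrait.jpg").convert("RGB"))
h, w, _ = src.shape
canvas = Image.new("RGB", (w, h), "black")
draw = ImageDraw.Draw(canvas)

n_brushes, frames = 40, 300
for t in range(frames):
    # Brush radius shrinks, then grows again: detail, blur, detail, ...
    r = 4 + 20 * (0.5 + 0.5 * np.cos(2 * np.pi * t / frames))
    for b in range(n_brushes):
        # Each brush wanders along its own slice of the Perlin field.
        x = int((0.5 + 0.5 * pnoise3(b * 10.1, 0.0, t * 0.01)) * (w - 1))
        y = int((0.5 + 0.5 * pnoise3(0.0, b * 10.1, t * 0.01)) * (h - 1))
        color = tuple(int(c) for c in src[y, x])   # paint with the photo's color
        draw.ellipse([x - r, y - r, x + r, y + r], fill=color)
    canvas.save(f"frames/noise_portrait_{t:04d}.png")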
I also need to re-render the still frames of this into video using ffmpeg. I've tried it on my most recent project, and the difference compared to the older workflow responsible for the video above is huge.
This is a Self-Organizing Map used to make an abstract animation based on the colors used in sets of images. This demo clip has been fed four JMW Turner paintings (two nautical sunsets and two of the burning of Parliament). There are a couple of others on Vimeo.
I'll post the code and some other samples when I get a moment. Like paletteSOM, I need to re-render this from stills using ffmpeg when I get the chance. Using it has made me realize exactly how embarrassingly blurred this render is.
Reading List Formatter
Coming soon (hopefully)
(This doesn't even rise to the level of a 'project,' as simple as it is. But I struggled for a while to get LaTeX to produce a reading list for my dissertation proposal in the format I wanted, so I finally rolled my own solution. It might be useful to others.)
Again, this doesn't rise to 'project' level, just a snippet of LaTeX I put together so that I could use asterisms (⁂) when writing papers. I use them to mark off sections of text which will need further attention when editing.
As I said, this isn't really a project, but I'm putting it up here because hopefully it will lead to me cleaning up and posting more of the macro file I've been piecing together over the last year.
% (A sketch of the definition; only the closing braces survived my notes, so the
% sizing and offset values here are to taste. Assumes a font with the ⁂ glyph.)
\newcommand{\asterism}{%
  \smash{%
    \raisebox{-.5ex}{%
      \resizebox{!}{1.5ex}{%
        ⁂%
      }% end resizebox
    }% end raisebox
  }% end smash
}
There are other macros floating around out there that will create asterisms, but the ones I tried don't work if you're not using single-spacing with standard leading. This one does — best I can tell — in addition to working with different-sized text, etc.
[ Oops. There is one issue with this I just discovered that I haven't ironed out yet. When you put \asterism at the front of a new paragraph LaTeX will begin an unindented line with the asterism, then start another new, indented paragraph for whatever comes after it. To stop this you can insert a non-breaking space ("~\asterism"), which won't take up any additional room but will make everything work correctly. Not ideal, but that's the work-around I'm using until I can dedicate some time to figuring this out. ]
You can access the entire list here, or go to a specific category below.
This isn't a list of my favorite things, but things I think other people should try out.
I've tried, to a certain extent, to keep away from listing obvious things that lots of people already know about or like, since you don't need my suggestion for those. And if I were to just list my favorite things, the page would go on forever. So I've tried to keep the list down to things I've actually recommended to friends in conversation. Nonetheless, I have a feeling it's going to grow pretty long.