
Deep Learning

Comparing ResMem and MemNet

In my previous posts about memorability (see the project link above), I've been discussing my models' performance fairly matter-of-factly: comparing their scores, reporting them in abstracts, and arguing that one model performs better than another and why I think that is happening. Some questions arise, though. For example, why did I get such a vastly different score with MemNet than what Khosla et al. reported?
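For a sense of how those score comparisons are made, here is a minimal sketch using entirely synthetic placeholder data: two hypothetical models are scored against the same human memorability ratings with a Spearman rank correlation, the metric used throughout these posts. The arrays and noise levels below are illustrative assumptions, not real LaMem data or real model outputs.

```python
# Illustrative comparison of two memorability models on the same test set.
# human_scores and the model predictions are synthetic placeholders.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
human_scores = rng.uniform(0.4, 1.0, size=500)                # stand-in ground-truth memorability
model_a_preds = human_scores + rng.normal(0, 0.05, size=500)  # a better-fitting model
model_b_preds = human_scores + rng.normal(0, 0.20, size=500)  # a noisier model

rho_a, _ = spearmanr(human_scores, model_a_preds)
rho_b, _ = spearmanr(human_scores, model_b_preds)
print(f"Model A Spearman rho: {rho_a:.3f}")
print(f"Model B Spearman rho: {rho_b:.3f}")
```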

ResMem and M3M

In my last post on computer vision and memorability, I looked at an existing model and started experimenting with variations on its architecture. The most successful attempts were those that use residual neural networks (ResNets), a type of deep neural network that has also been used to model visual structures in the brain. ResMem, one of the new models, uses a variation on ResNet in its architecture to leverage that visual recognition power for memorability estimation.
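To make the architectural idea concrete, here is a rough sketch of a ResNet-backed memorability regressor: a pretrained ResNet backbone supplies visual features, and a small head maps them to a single score between 0 and 1. This is not ResMem's actual implementation; the choice of ResNet-152 and the layer sizes are assumptions for illustration.

```python
# Sketch of a ResNet-backed memorability regressor (illustrative only).
import torch
import torch.nn as nn
from torchvision.models import resnet152

class MemorabilityRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet152(weights="IMAGENET1K_V1")                    # pretrained visual features
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # drop the classification layer
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(2048, 256),
            nn.ReLU(),
            nn.Linear(256, 1),
            nn.Sigmoid(),                                                # memorability scores lie in [0, 1]
        )

    def forward(self, x):
        return self.head(self.features(x))

model = MemorabilityRegressor()
scores = model(torch.randn(4, 3, 224, 224))                              # four dummy images -> four scores
```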

ResMem Release

A user-ready version of ResMem is now available on PyPI! The model included in the package is designed to estimate the memorability of an input image; it is not intended for feature-space analysis. It is optimized for accuracy by allowing the ResNet features to retrain, which is why the model shipped in the resmem package has been dubbed "ResMemRetrain." Statistically, the retrained model achieves a higher Spearman rank correlation than the variant that leaves the ResNet weights as-is.
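Here is a usage sketch for the package, assuming the interface documented in the resmem README (a ResMem class, a bundled transformer preprocessing pipeline, and a 227x227 input view); exact names and shapes may differ between versions, and example.jpg is a placeholder path.

```python
# Usage sketch for the resmem package (pip install resmem).
import torch
from PIL import Image
from resmem import ResMem, transformer

model = ResMem(pretrained=True)                  # load the pretrained (retrained) weights
model.eval()

img = Image.open("example.jpg").convert("RGB")   # placeholder image path
x = transformer(img)                             # resize, crop, and normalize for the model
with torch.no_grad():
    score = model(x.view(-1, 3, 227, 227))       # predicted memorability score
print(float(score))
```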

MemNet: Models for Predicting Image Memorability

MemNet was an attempt to build a neural-network-based model for predicting the memorability of an image. It was developed by Khosla et al. at MIT's Computer Science and Artificial Intelligence Laboratory with moderate success. It is the most commonly used neural-network regression model for this purpose, and it has been used and cited in many research papers since its publication. There are some problems, however. MemNet was built in Caffe, a deep learning framework that has been defunct since shortly after MemNet's publication.

Plurals and ML

Using older machine learning models to conjugate English verbs produced rather silly results. These models performed at an acceptable level for many words, but when given nonsense words as input they would produce humorous conjugations. For example, we have:

Verb        Human-Generated Past Tense    Machine-Generated Past Tense
mail        mailed                        membled
conflict    conflicted                    conflafted
wink        winked                        wok
quiver      quivered                      quess
satisfy     satisfied                     sedderded
smairf      smairfed                      sprurice
trilb       tribled                       treelilt
smeej       smeejed                       leefloag
frilg       frilged                       freezled

Naturally, my girlfriend and I found this hilarious.