Links for 2012-06-13

  • The Silencing of Maya : software patent shakedown threatens to remove a 4-year-old’s only means of verbal expression: ‘Maya can speak to us, clearly, for the first time in her life. We are hanging on her every word. We’ve learned that she loves talking about the days of the week, is weirdly interested in the weather, and likes to pretend that her toy princesses are driving the bus to school (sometimes) and to work (other times). This app has not only allowed her to communicate her needs, but her thoughts as well. It’s given us the gift of getting to know our child on a totally different level. I’ve been so busy embracing this new reality and celebrating, that I kind of forgot that there was an ongoing lawsuit, until last Monday. When Speak for Yourself was removed from the iTunes store.’
    (tags: speak-for-yourself children law swpats patenting stories ipad apps)

  • _Building High-level Features Using Large Scale Unsupervised Learning_ [paper, PDF] : “We consider the problem of building high-level, class-specific feature detectors from only unlabeled data. For example, is it possible to learn a face detector using only unlabeled images? To answer this, we train a 9-layered locally connected sparse autoencoder with pooling and local contrast normalization on a large dataset of images (the model has 1 billion connections, the dataset has 10 million 200×200 pixel images downloaded from the Internet). We train this network using model parallelism and asynchronous SGD on a cluster with 1,000 machines (16,000 cores) for three days. Contrary to what appears to be a widely-held intuition, our experimental results reveal that it is possible to train a face detector without having to label images as containing a face or not. Control experiments show that this feature detector is robust not only to translation but also to scaling and out-of-plane rotation. We also find that the same network is sensitive to other high-level concepts such as cat faces and human bodies. Starting with these learned features, we trained our network to obtain 15.8% accuracy in recognizing 20,000 object categories from ImageNet, a leap of 70% relative improvement over the previous state-of-the-art.” (A rough illustrative sketch of the core autoencoder-plus-SGD idea appears after the list.)
    (tags: algorithms machine-learning neural-networks sgd labelling training unlabelled-learning google research papers pdf)
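For a sense of what the paper’s basic building block looks like, here is a minimal, self-contained sketch of a single sparse autoencoder layer trained by plain SGD on unlabeled patches. This is not the paper’s code: the layer sizes, learning rate, sparsity penalty, and random patches are illustrative assumptions, and the real system stacks nine locally connected layers with pooling and local contrast normalization, trained with model parallelism and asynchronous SGD across 1,000 machines.

```python
# Minimal sketch (not the paper's code): one sparse autoencoder layer trained
# by SGD on unlabeled patches. All sizes and hyperparameters are illustrative
# assumptions; the actual model is 9 locally connected layers with pooling and
# local contrast normalization, trained asynchronously on 1,000 machines.
import numpy as np

rng = np.random.default_rng(0)

n_visible = 64          # flattened 8x8 image patch (assumption)
n_hidden = 32           # hidden units in this layer (assumption)
lr = 0.1                # SGD learning rate (assumption)
sparsity_weight = 0.01  # L1 penalty on hidden activations (assumption)

W = rng.normal(0.0, 0.1, (n_hidden, n_visible))  # tied encoder/decoder weights
b_enc = np.zeros(n_hidden)
b_dec = np.zeros(n_visible)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgd_step(x):
    """One SGD step on reconstruction error plus an L1 sparsity penalty."""
    h = sigmoid(W @ x + b_enc)     # encode the patch
    x_hat = W.T @ h + b_dec        # decode it back (tied weights)
    err = x_hat - x                # reconstruction error

    # Backpropagate: through the decoder, the sigmoid, then into the weights.
    grad_h = W @ err + sparsity_weight * np.sign(h)
    grad_pre = grad_h * h * (1.0 - h)
    grad_W = np.outer(grad_pre, x) + np.outer(h, err)  # encoder + decoder paths

    W[...] -= lr * grad_W
    b_enc[...] -= lr * grad_pre
    b_dec[...] -= lr * err
    return float(np.mean(err ** 2))

# "Unlabeled data": random patches stand in for the 10M downloaded images.
loss = 0.0
for step in range(1000):
    loss = sgd_step(rng.random(n_visible))

print(f"reconstruction MSE after 1000 steps: {loss:.4f}")
```

The point of the sketch is only the shape of the objective: reconstruction error plus a sparsity penalty, minimized on unlabeled data, with no labels anywhere in the loop.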
