The technology behind the app is actually interesting. For instance, when you take a photo it measures the audio profile of the room, captures the compass reading and other sensor data, and can identify with surprising accuracy which other users are in the room at the same time. I could go into that more here, but you really should listen to the interview, because this technology lets them build a new kind of “social camera.”
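To make the idea concrete, here's a minimal sketch of how that kind of co-location matching might work, assuming each photo carries an audio "fingerprint" (energy in a few frequency bands) plus a timestamp. None of these function names or thresholds come from Color itself; this is purely illustrative.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length audio profiles."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def likely_same_room(photo_a, photo_b, audio_threshold=0.9, max_seconds=30):
    """Guess whether two captures happened in the same place at the same time.

    Each photo is a dict with 'audio' (a list of band energies) and
    'time' (epoch seconds). A real system would fuse many more signals
    (GPS, Wi-Fi, compass, light levels); this shows only the basic idea:
    rooms that sound alike, photographed at nearly the same moment,
    probably contain the same people.
    """
    close_in_time = abs(photo_a["time"] - photo_b["time"]) <= max_seconds
    similar_sound = (
        cosine_similarity(photo_a["audio"], photo_b["audio"]) >= audio_threshold
    )
    return close_in_time and similar_sound

# Two captures with near-identical room audio, ten seconds apart:
a = {"audio": [0.2, 0.8, 0.1, 0.5], "time": 1000}
b = {"audio": [0.21, 0.79, 0.12, 0.5], "time": 1010}
# A capture from a very different-sounding room:
c = {"audio": [0.9, 0.1, 0.7, 0.0], "time": 1005}
print(likely_same_room(a, b))  # True
print(likely_same_room(a, c))  # False
```

The interesting design choice is that ambient audio works where GPS fails: two phones in the same café hear the same noise floor, while two phones on opposite sides of a wall, with nearly identical coordinates, don't.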
This is essentially how I've felt about it. The technology is fascinating, and the end-user application is engaging in a sort of from-now-on-we-travel-in-tubes way, but the transition from concept to content is frustrating, to say the least.
About two hours ago I sat at Eataly in Madison Square with a friend and fired up Color. It found him, and no one else. Madison Square on an early Saturday afternoon on one of the first nice days in NYC since last year, and it found one other person who was running the app solely because I told him to.
Five confused minutes later we gave up and moved on. The critical mass necessary to make Color work just isn't realistic.
There must be enough people running Color at the same time, in the same place, with the app open, taking pictures of things for it to be interesting. Unlike Foursquare, which accumulates check-ins over time, or Yelp, which focuses on reviews and destination status, Color requires the real-time, simultaneous engagement of a large number of people in the same place, something I simply don't see working out in any but the most extreme cases.
Maybe the end goal is to develop a technology that can supply advertisers with vast amounts of data for targeted marketing. 'Free Android phone! Just run this app in the background that records from every sensor your phone has and sends it to the advertising mothership!'