Building Network Maps from Conference Photographs

A network edge being formed during the morning break

As discussed in the previous blog post, Glorifying the Selfie, my co-conspirator Marc Smith came up with the idea of using photography to capture new connections made during the Data Leadership Summit. The concept was that people generally form connections in dyads (one-to-one). If we tasked the photographer with capturing as many of those dyads as possible throughout the day-long Summit, we could use the photos to create a new network map of the day.

Our intention with the Summit social network analysis was to prototype new tools and methodologies that could later be used for much larger groups across all kinds of conferences, creating a network data layer atop the typical programming. To do this with large groups, we sought to discover which processes could be automated. While we could get by with human analysis for a group of 80 leaders, we recognized that a group of, say, 5,000 people would prove more challenging.

Marc had identified facial recognition technology as a clear opportunity for automation. If we could teach the system who each person was, the software could then classify all the people in our dyad photos, creating edge data. NodeXL could crunch the data, and voilà: we would have a new network map with minimal human input.
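To make that pipeline concrete, here is a minimal sketch, in Python rather than the tools we actually used, of how per-photo name labels might become an edge list that NodeXL can import. The photo labels and names are invented placeholders.

```python
import csv
from itertools import combinations

# Hypothetical output of the face recognition step: the people
# identified in each dyad photo (names are placeholders).
photo_labels = [
    ["Alice Example", "Bob Example"],
    ["Bob Example", "Carol Example"],
    ["Alice Example", "Bob Example"],  # repeat encounter
]

# Every photo in which two people appear together becomes one
# undirected edge; sorting each pair deduplicates repeat encounters.
edges = set()
for people in photo_labels:
    for a, b in combinations(sorted(people), 2):
        edges.add((a, b))

# Write a simple two-column edge list for import into NodeXL.
with open("summit_edges.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Vertex 1", "Vertex 2"])
    writer.writerows(sorted(edges))
```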

First, we’ll describe what we achieved by brute force, then go into the automation story.

When participants registered at the Summit, they received a name badge and had a photo taken to build the name-to-face association. Our node identification process was aided by having created name badges with large, bold typography that was visible from a distance.

Names on the badges were easily visible from afar while participants were standing

In analyzing the photos, we immediately noticed an oversight. While the names were indeed visible on the badges from a distance, in many cases we couldn't see the badge at all. It turned out that the lanyards were too long, so the badges dipped below the table when the participants were seated. On the occasions where we could see the name badges, it was easy to label the nodes. On those occasions where we could not see the badges, we used the registration photos for identification. Because this was an invite-only event, I happened to know what every participant looked like and could identify them without the photo reference.

The following is the resulting network map from connections made during the day.

Network map that resulted from manual data tagging and analysis

I am not going to spend much of this post analyzing the map, as the main focus of the post is our process. However, a few interesting things to point out are:

  • The photo analysis indicated seven clusters, only some of which matched the assigned table groupings from the morning. This means that participants circulated and made connections throughout the day.
  • The participants with the highest betweenness centrality in the network before the event were typically not the same as those with the highest betweenness centrality in the network from the event itself. This suggests that many participants increased their centrality within the network (a sketch of how such a comparison can be computed follows this list).
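For anyone curious how that before-and-after comparison could be computed, the following is a minimal sketch using Python's networkx library rather than the NodeXL workflow we actually used; the names and edges below are made up.

```python
import networkx as nx

# Hypothetical edge lists: connections known before the event, and
# connections captured in the dyad photos during the event.
before = nx.Graph([("Alice", "Bob"), ("Bob", "Carol"), ("Carol", "Dave")])
during = nx.Graph([("Alice", "Carol"), ("Dave", "Alice"), ("Bob", "Dave")])

# Betweenness centrality measures how often a person sits on the
# shortest paths between other people in the network.
bc_before = nx.betweenness_centrality(before)
bc_during = nx.betweenness_centrality(during)

# Rank participants by their centrality in the event network and
# compare it with their pre-event centrality.
for name in sorted(bc_during, key=bc_during.get, reverse=True):
    print(f"{name}: before={bc_before.get(name, 0):.2f} during={bc_during[name]:.2f}")
```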

Back to process… As I mentioned previously, we were interested in how the edge identification could be automated. While the security industry may be able to know exactly who each of us is from any angle of our face, off-the-shelf software seems to have a very difficult time picking faces out of a crowd, even a small crowd. Facebook has sophisticated facial recognition, but seemingly requires that people be within one's network (friends, or having liked your page) or have open privacy settings in order to be tagged. Because the majority of our network building has been on Twitter and LinkedIn rather than Facebook, Facebook didn't seem to recognize most people in our dyad photos.

We turned to iPhoto. I had played around with the Faces function before and hoped that version 9.6 would have brought advances in the sophistication of its image recognition technology. To say that we were sorely disappointed would be an understatement.

First I had to go through and tag each person from their registration photos. Understandable.

Tagging Rodger Lea so iPhoto would recognize him

After tagging all the participants, my hope was that I could then go into the Faces page, click on any person, and see all the photos of them collated.

iPhoto Faces shows all the people you have tagged

iPhoto had a terrible time matching those registration photos to the dyads, resulting in almost no accurate labeling.

iPhoto not only failed to find the other images of Adam Naamani, it confused Adam with Michael Fergusson and Peter McLachlan, who were already tagged

Further, no matter how much I tried to teach iPhoto by tagging more dyad photos of a person, it did not seem to get any better at recognizing them.

Even with three different angles for John Lilleyman tagged, it still could not identify him and thought he was Mike Pedersen

This is not to say that we cannot use photography to create network maps. However, based on our experience, we are unlikely to be able to rely on off-the-shelf software to be smart enough to label our edges, nor to rely on facial recognition alone. It turns out that those name tags could be the easier way to create the edge data. With the support of Mechanical Turk workers, we could turn around a large data set quickly and cost-effectively (with shorter lanyards next time). We might also be better off using character recognition technology for the name badges, rather than facial recognition.
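As a sketch of what that badge-reading direction might look like, here is a hypothetical example using the open-source Tesseract OCR engine via the pytesseract wrapper; we have not tested this approach, and the file name and crop coordinates are placeholders.

```python
from PIL import Image
import pytesseract

# Crop the (hypothetical) badge region out of a dyad photo before
# running OCR; coordinates are left, top, right, bottom in pixels.
photo = Image.open("dyad_photo.jpg")
badge_region = photo.crop((400, 600, 900, 800))

# Tesseract returns whatever text it finds; matching that text against
# the registration list would still need fuzzy matching or human review.
badge_text = pytesseract.image_to_string(badge_region)
print(badge_text.strip())
```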

In the meantime, we have an insightful network map documenting connections made during the Summit, and a great learning experience in how to map network connections at conferences. We’re already looking forward to the next opportunity.

Adam Lerner

Adam is the Founder of Solvable, where he works with broadminded leaders from industries caught in tornadoes of sweeping change. Prior to Solvable, Adam spent nearly a decade in design and brand consultancies across the US and Canada – Kaldor, Cause+Affect, M3 Design, frog – as a strategic planner providing research, brand, and technology insights. With a career that began in New York not-for-profits – the Solomon R. Guggenheim Museum, Eyebeam, and US Fund for UNICEF – Adam has direct experience in member-driven organizations. Adam has an MBA from the University of Texas McCombs School of Business.