Benefits

This course provides an overview of Social Network Analysis (SNA) and demonstrates through theory and practical case studies how it can be used in HCI (especially computer-mediated communication and CSCW) research and practice. This topic is of particular importance due to the popularity of social networking websites (e.g. YouTube, Facebook, MySpace) and social computing. As people increasingly use online communities for social interaction, new methods are needed to study these phenomena. SNA is a valuable contribution to HCI research because it makes it possible to rigorously study the complex patterns of online communication.
Social network theory views a network as a group of actors who are connected by a set of relationships. Actors are often people, but can also be nations, organizations, objects etc. Social Network Analysis (SNA) focuses on patterns of relations between these actors. It seeks to describe networks of relations as fully as possible. This includes teasing out the prominent patterns in such networks, tracing the flow of information through them, and discovering what effects these relations and networks have on people and organizations. It can therefore be used to study network patterns of organizations, ideas, and people that are connected via various means in an online environment.
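As a minimal sketch of this vocabulary, the toy network below (the actor names are invented for illustration) represents actors as nodes and relationships as edges, using the networkx library:

```python
# A toy social network: actors are nodes, relationships are edges.
# Actor names here are invented for the example.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("Ada", "Ben"), ("Ben", "Cara"), ("Cara", "Ada"),  # a tight triad
    ("Cara", "Dan"), ("Dan", "Eve"),                   # a chain hanging off it
])

print(G.number_of_nodes(), G.number_of_edges())  # 5 actors, 5 relations
# Density: the fraction of possible ties that actually exist.
print(round(nx.density(G), 2))                   # 5 of 10 possible ties -> 0.5
```

From a structure like this, SNA can pull out exactly the kinds of patterns described above: who is connected to whom, which ties are dense, and where information can flow.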
Last week I made a video with Norm Rose from Travel Tech Consulting about the ways different airlines get talked about on Twitter. Norm explores new technologies that impact the travel business, and he asked me to create two maps: one for United Airlines and the other for Delta Airlines. In the video below, we discuss these maps and what they mean for any kind of brand engaging in social media.
United has a larger Twitter-mentioning population and a bigger main component. The larger profile photos indicate Twitter authors with many followers. The large population of isolates and small components at the bottom of both images are people who mention either airline but are not in a conversation or a relationship with anyone else who also mentioned these brands. They are “shouts” about a brand, not conversations. In contrast, the large component in each image is the connected collection of people who talk to people who talk about these brands. They are committed to the topic in a way the less connected authors are not: they know someone who also talks about air travel.
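This split between the conversational core and the “shouts” can be computed directly from the mention network. A sketch with networkx, using an invented handful of authors: connected components separate the main cluster from the isolates.

```python
# Invented mention network: a few authors replying to each other, plus
# authors who mention the brand but talk to no one.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("a", "b"), ("b", "c"), ("c", "d")])  # conversations
G.add_nodes_from(["x", "y", "z"])                        # isolated "shouts"

components = sorted(nx.connected_components(G), key=len, reverse=True)
main = components[0]                                # the conversational core
isolates = [n for n in G.nodes if G.degree(n) == 0]  # shouters with no ties

print(len(main))      # 4 authors in the connected conversation
print(len(isolates))  # 3 authors shouting about the brand without replies
```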
Viewed over time, we can start to assess the ways these brands’ Twitter populations are changing. Are new people moving into central hub positions? Are people who held those positions drifting away?
A key observation is that some people with relatively few followers occupy highly central positions in the graph. This suggests that these authors sit in locations that draw attention to their content from other well-positioned people. Popularity is not just about volume of connections; from a social network perspective, importance is also a function of where the person sits within the graph.
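A toy example of this effect, with an invented graph: the node below labeled "bridge" has only two connections, yet it scores highest on betweenness centrality (one standard measure of how often a node lies on the paths between others), sketched with networkx:

```python
# Two dense clusters joined only through a low-degree "bridge" node.
# The graph is invented purely to illustrate position vs. connection count.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("a1", "a2"), ("a2", "a3"), ("a1", "a3"),   # cluster A
                  ("b1", "b2"), ("b2", "b3"), ("b1", "b3"),   # cluster B
                  ("a1", "bridge"), ("bridge", "b1")])        # the only link

degree = dict(G.degree())
betweenness = nx.betweenness_centrality(G)

# The most central node by betweenness has only 2 direct ties:
top = max(betweenness, key=betweenness.get)
print(top, degree[top])  # bridge 2
```

Every path between the two clusters runs through "bridge", so it matters far more than its two connections suggest, exactly the pattern seen with low-follower authors in central positions.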
Talks from SenseNetworks and MIT made the vision of a continuous “trail” document assembled by location and biological sensors from every human on earth seem not so outlandish. Samuel Madden from MIT spoke about opportunistic mobile WiFi connectivity in moving vehicles. MIT rebuilt the WiFi stack to enable 13 ms associations instead of 13 second associations with an access point. The result is that a car with such a WiFi card can drive along Boston city streets and exchange about 200 KB a minute with open, unsecured access points along the way. Free bandwidth in the city. What do they do with it? They stream live telemetry from a fleet of cabs. The cabs carry accelerometers and GPS units that report in near real time back to a server. Along with data from the engine computer, they collect a great deal of data about traffic and road surface quality. They can see changing patterns in the activity levels of the cabs and infer changing activity at businesses.
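A back-of-the-envelope sketch of why the faster association matters. The 13 second and 13 ms figures come from the talk; the dwell time a moving car spends in one access point's range is an assumption made up for this illustration:

```python
# Assumed: a car passes through one open AP's coverage in about 5 seconds.
# Association times (13 s conventional, 13 ms rebuilt stack) are from the talk.
dwell_s = 5.0
slow_assoc_s = 13.0
fast_assoc_s = 0.013

def usable_window(dwell: float, assoc: float) -> float:
    """Seconds left to move data after the association handshake completes."""
    return max(0.0, dwell - assoc)

print(usable_window(dwell_s, slow_assoc_s))  # 0.0 -- never finishes associating
print(usable_window(dwell_s, fast_assoc_s))  # ~4.99 s of usable transfer time
```

With a 13 second handshake the car has left the access point's range before it can send a single byte; at 13 ms nearly the whole pass is usable, which is what makes the 200 KB per minute figure plausible.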
A major theme of several presentations was crowdsourcing for science, with talks about ebird.org and galaxyzoo highlighting a distinction between sites that enable a group to collect data (ebird, with the associated issues of data validity) and sites that enable a group to annotate data that has already been expertly collected (galaxyzoo).
Matthew Salganik: Community-Generated and Community-Sorted Information

In his presentation Matt made a remarkable connection between deliberative democracy and the cat comparison site Kitten Wars. His talk introduced a model for a kind of Hot or Not for political discussions. His group built a web site that helped the student community at Princeton set its priorities for student government. The work has significant implications for deliberation tools for organizations and enterprises. Unlike systems that simply encourage users to contribute ideas to a potentially long and never-acted-upon list, this system poses a comparison task that can be performed in one click but demands implicit contrasts and estimation of value. The almost addictive “hot or not” style interface (or, more accurately, Kitten Wars style) lets users decide between, for example, longer hours for the student cafeteria or expanded video rental services, see their choice in the context of others’ choices, and then choose between two things again. After a population has run through a set of pair-wise contrasts, a broader sense of the priorities of the community can be calculated.
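A minimal sketch of how one-click pairwise choices can be rolled up into a community priority ranking. The votes and the win-rate scoring below are invented for illustration; the actual system may aggregate comparisons differently.

```python
# Each tuple is (winner, loser) from one user's single-click comparison.
# The vote data and the candidate priorities are made up for this sketch.
from collections import Counter
import itertools

votes = [
    ("late cafeteria hours", "video rentals"),
    ("late cafeteria hours", "more printers"),
    ("video rentals", "more printers"),
    ("late cafeteria hours", "video rentals"),
]

wins = Counter(winner for winner, _ in votes)
appearances = Counter(itertools.chain.from_iterable(votes))
win_rate = {item: wins[item] / appearances[item] for item in appearances}

# Community priorities, ordered by share of comparisons won.
ranking = sorted(win_rate, key=win_rate.get, reverse=True)
print(ranking)
```

Each individual click is cheap, but across a population the accumulated comparisons yield a defensible ordering of priorities, which is the core of the approach described above.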
In my talk, I focused on the idea that information wants not to be free or expensive; rather, information wants to be copied. Like DNA, any string of bits aims to make a duplicate copy of itself. Several technical realities mean that while information may exist on a spectrum from private to public, it only moves in one direction (toward public) and almost never back. Once made public on the Internet, even if only for a moment, a photo, document, or other digital object has almost certainly been copied, indexed, backed up, or replicated. Trying to delete a digital object once it is widely distributed is like trying to take wine out of water. This is because all cryptography becomes brittle over time, most bits end up exposed after they get distributed, and more events trigger widespread distribution of bits than expected (for example, linking a photo, and a location, to a tweet that gets copied to LinkedIn and Facebook, then appears in an RSS feed and is copied from there to FriendFeed). As it travels, information loses more of the access controls that initially made it relatively private, until it is effectively public.
Sadly, no picture for Luis von Ahn’s talk; however, this presentation was a fascinating review of the CAPTCHA and reCAPTCHA services and the new direction of providing translation services as language learning games. Luis von Ahn invented the CAPTCHA, felt bad about the cumulative human time wasted by filling out those squiggle word puzzles to get onto a web site, and decided to harness CAPTCHAs for a useful task: text recognition for books. When OCR software fails to recognize words in bad scans of books, the garbled images are presented to humans, who, collectively, have deciphered millions of previously unintelligible words. Now, his new project aims to expand the small population of bi- or multilingual speakers who can translate between languages. The approach applies the “Mechanical Turk” “human intelligence task” concept to language translation. His language translation service presents foreign language sentences to users, with the dictionary translations of each word listed below it. Users click on the best word choice beneath each foreign word. The surprising results: pretty good translations AND users start learning a foreign language!
Kate Niederhoffer and I presented a combined view of social media from the perspectives of social psychology and sociology. Kate applies a linguistic background to analyze the content of social media, while I bring social network analysis to bear on the structures created by links and replies.
We got some great questions from the attendees about how they can apply these approaches to their social media investments. Tools like NodeXL can be helpful for people interested in structural data analysis. Kate has built tools for performing semantic analysis of social media content over time.