ThreadMill 0.1: Social Accounting for Message Thread Collections

The Social Media Research Foundation is pleased to announce the immediate availability of ThreadMill 0.1.  ThreadMill is a free and open application that consumes message thread data and produces reports about each author, thread, forum, and board along with visualizations of the patterns of connection and activity.  ThreadMill is written in Ruby, and depends on MongoDB, SinatraRB, HAML, and Flash to collect, analyze, and report data about collections of conversation threads.

Threaded conversations are a major form of social media. Message boards, email and email lists, Twitter, blog comments, text messages, and discussion forums are all social media systems built around the message thread data structure. As messages are exchanged through these systems, some are sent as replies to a particular previous message, and chains of messages form. Message chains come in two major forms: branching and non-branching. Branching threads allow more than one message to reply to a prior message. Non-branching threads are single chains, like a string of pearls, in which only one message may reply to a prior message. Many web-based message boards are non-branching; many email systems and discussion forums are branching.
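To make the distinction concrete, a thread can be modeled as messages that each carry an optional pointer to the message they answer; the thread branches as soon as any message receives more than one direct reply. A minimal Ruby sketch (the field names are illustrative, not ThreadMill's internal schema):

```ruby
# Each message has an id and an optional reply_to pointer.
messages = [
  { id: 1, reply_to: nil },
  { id: 2, reply_to: 1 },
  { id: 3, reply_to: 2 },
  { id: 4, reply_to: 2 }  # a second reply to message 2 makes the thread branch
]

# Count the direct replies each message receives.
reply_counts = Hash.new(0)
messages.each { |m| reply_counts[m[:reply_to]] += 1 if m[:reply_to] }

branching = reply_counts.values.any? { |n| n > 1 }
puts branching ? "branching thread" : "non-branching thread"  # => branching thread
```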

ThreadMill requires a minimal set of data elements to generate its reports. The data table must contain, at a minimum, a row for each message with the name of the message board, the forum, the thread, and the author, along with a unique identifier for the message and the date and time it was posted. Optional data elements include the unique identifier of the message being replied to, the URL of the message, and the URL of the author's profile photo.
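A hypothetical per-message record satisfying these requirements might look like the following; the field names are assumptions for illustration, not ThreadMill's documented column names:

```ruby
message = {
  # required elements
  board:      "example-board",
  forum:      "general",
  thread:     "welcome-thread",
  author:     "alice",
  message_id: "msg-0001",
  posted_at:  Time.utc(2011, 10, 9, 14, 30),
  # optional elements
  reply_to_id:      nil,  # unique id of the message being replied to, if any
  message_url:      "http://example.com/msg-0001",
  author_photo_url: "http://example.com/alice.jpg"
}
```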

All forms of threaded message exchange can be measured. Simple measures, like counts of messages or of authors, are obvious and useful. Other measures require more sophisticated analysis. For example, the network of connections that forms as different authors reply to one another can be extracted and analyzed with network analysis methods, yielding metrics that describe each person's location in the reply graph.
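For instance, a reply network can be sketched by drawing a directed edge from each replier to the author of the message being answered; a simple location metric such as in-degree (how often an author is replied to) then falls out directly. A hedged illustration in Ruby, not ThreadMill's actual pipeline:

```ruby
messages = [
  { id: 1, author: "alice", reply_to: nil },
  { id: 2, author: "bob",   reply_to: 1 },
  { id: 3, author: "carol", reply_to: 1 },
  { id: 4, author: "alice", reply_to: 2 }
]

by_id = messages.map { |m| [m[:id], m] }.to_h

# A directed edge runs from the replier to the author being answered.
edges = messages.filter_map do |m|
  parent = by_id[m[:reply_to]]
  [m[:author], parent[:author]] if parent
end

# In-degree: one simple measure of an author's location in the reply graph;
# centrality measures build on the same edge list.
in_degree = Hash.new(0)
edges.each { |_from, to| in_degree[to] += 1 }
p in_degree  # => {"alice"=>2, "bob"=>1}
```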

ThreadMill generates several data sets that can be used to create visualizations of the activity and structure of a message board collection.

A Treemap data set can illustrate the nested hierarchy of the collection: authors within threads, threads within fora, fora within boards, and boards within collections. Treemap visualizations of collections of threaded conversations quickly highlight the most active or populous discussions.
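The nested counts such a treemap needs can be accumulated in a single pass over the messages. A sketch, again with assumed field names:

```ruby
require 'json'

messages = [
  { board: "B1", forum: "F1", thread: "T1", author: "alice" },
  { board: "B1", forum: "F1", thread: "T1", author: "bob"   },
  { board: "B1", forum: "F2", thread: "T2", author: "alice" }
]

# boards -> fora -> threads -> per-author message counts
tree = Hash.new do |boards, board|
  boards[board] = Hash.new do |fora, forum|
    fora[forum] = Hash.new { |threads, thread| threads[thread] = Hash.new(0) }
  end
end
messages.each { |m| tree[m[:board]][m[:forum]][m[:thread]][m[:author]] += 1 }

# The leaf values (message counts) drive the treemap cell areas.
puts JSON.pretty_generate(tree)
```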

An AuthorLine visualization takes the form of a double histogram, with bubbles representing each thread the author was active in during each time period, sized by the number of messages the author contributed and sorted by size. Threads the author initiated appear as bubbles above the center line; messages the author contributed to threads started by others appear as bubbles stacked below it. AuthorLines quickly reveal an author's pattern of activity and identify which of several types of contributor the author is.
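A sketch of how an AuthorLine dataset might be assembled for a single author, assuming each message records the thread's initiator and a time-period bucket (these fields are illustrative):

```ruby
messages = [
  { thread: "T1", author: "alice", starter: "alice", period: "2011-10" },
  { thread: "T1", author: "alice", starter: "alice", period: "2011-10" },
  { thread: "T2", author: "alice", starter: "bob",   period: "2011-10" },
  { thread: "T3", author: "alice", starter: "alice", period: "2011-11" }
]

# Above the line: threads alice started; below: threads started by others.
authorline = Hash.new { |h, k| h[k] = { above: Hash.new(0), below: Hash.new(0) } }
messages.each do |m|
  side = m[:starter] == m[:author] ? :above : :below
  authorline[m[:period]][side][m[:thread]] += 1  # bubble size = message count
end

p authorline
# => {"2011-10"=>{:above=>{"T1"=>2}, :below=>{"T2"=>1}},
#     "2011-11"=>{:above=>{"T3"=>1}, :below=>{}}}
```

Within each period the bubbles would then be sorted by size before drawing.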

A scatter plot visualization represents each author as a bubble in an X-Y space defined by the number of distinct days the author was active and the average number of messages the author contributed to the threads in which they participated.
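Both coordinates are simple per-author aggregates; a sketch with assumed fields:

```ruby
require 'date'

messages = [
  { author: "alice", thread: "T1", date: Date.new(2011, 10, 9)  },
  { author: "alice", thread: "T1", date: Date.new(2011, 10, 10) },
  { author: "alice", thread: "T2", date: Date.new(2011, 10, 10) }
]

points = messages.group_by { |m| m[:author] }.map do |author, ms|
  days_active = ms.map { |m| m[:date] }.uniq.size
  per_thread  = ms.group_by { |m| m[:thread] }.values.map(&:size)
  { author: author, x: days_active, y: per_thread.sum.to_f / per_thread.size }
end
p points  # => [{:author=>"alice", :x=>2, :y=>1.5}]
```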

A time series line chart reveals the days of maximum and minimum activity along with trends.
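The series behind such a chart is just a tally of messages per day; for example:

```ruby
require 'date'

dates = [Date.new(2011, 10, 9), Date.new(2011, 10, 10), Date.new(2011, 10, 10)]
daily = dates.tally.sort.to_h                     # messages per day, in order
peak_day, peak_count = daily.max_by { |_d, n| n } # day of maximum activity
puts "peak: #{peak_day} (#{peak_count} messages)"
```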

A network diagram reveals the overall structure of the discussion space and the people who occupy strategic locations within the network graph.

ThreadMill has received generous assistance from Morningside Analytics.  Bruce Woodson implemented ThreadMill.

October 9-11, 2011: IEEE 2011 Social Computing, Boston: NodeXL Paper on “Group-in-a-box” layouts

This year the IEEE Social Computing conference is being held in Boston, October 9-11, 2011.

The NodeXL team from the Social Media Research Foundation has a paper on our newest layout feature in NodeXL: Group-in-a-box.

Abstract: Communities in social networks emerge from interactions among individuals and can be analyzed through a combination of clustering and graph layout algorithms. These approaches result in 2D or 3D visualizations of clustered graphs, with groups of vertices representing individuals that form a community. However, in many instances the vertices have attributes that divide individuals into distinct categories such as gender, profession, geographic location, and similar. It is often important to investigate what categories of individuals comprise each community and vice-versa, how the community structures associate the individuals from the same category. Currently, there are no effective methods for analyzing both the community structure and the category-based partitions of social graphs. We propose Group-In-a-Box (GIB), a metalayout for clustered graphs that enables multi-faceted analysis of networks. It uses the treemap space filling technique to display each graph cluster or category group within its own box, sized according to the number of vertices therein. GIB optimizes visualization of the network sub-graphs, providing a semantic substrate for category-based and cluster-based partitions of social graphs. We illustrate the application of GIB to multi-faceted analysis of real social networks and discuss desirable properties of GIB using synthetic datasets.
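For intuition about the space-filling idea, here is a slice-and-dice treemap sketch in Ruby; it is deliberately simpler than the layouts the paper uses and is not the GIB implementation, but it shows how each cluster can receive a box with area proportional to its vertex count:

```ruby
Box = Struct.new(:name, :x, :y, :w, :h)

# Split a rectangle into strips whose areas are proportional to cluster sizes.
def slice(clusters, x, y, w, h, horizontal: true)
  total = clusters.sum { |_name, size| size }.to_f
  clusters.map do |name, size|
    frac = size / total
    if horizontal
      box = Box.new(name, x, y, w * frac, h)
      x += w * frac
    else
      box = Box.new(name, x, y, w, h * frac)
      y += h * frac
    end
    box
  end
end

clusters = { "cluster A" => 20, "cluster B" => 10, "cluster C" => 5 }
slice(clusters, 0, 0, 100, 100).each do |b|
  puts format("%-9s x=%5.1f w=%5.1f h=%5.1f", b.name, b.x, b.w, b.h)
end
# Each cluster's sub-graph would then be laid out inside its own box.
```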

The paper is authored by:

Eduarda Mendes Rodrigues*, Natasa Milic-Frayling†, Marc Smith‡, Ben Shneiderman§, Derek Hansen¶
* Dept. of Informatics Engineering, Faculty of Engineering, University of Porto, Portugal – eduardamr @ acm.org
† Microsoft Research, Cambridge, UK – natasamf @ microsoft.com
‡ Connected Action Consulting Group, Belmont, California, USA – marc @ connectedaction.net
§ Dept. of Computer Science & Human-Computer Interaction Lab, University of Maryland, College Park, Maryland, USA – ben @ cs.umd.edu
¶ College of Information Studies, University of Maryland, College Park, Maryland – dlhansen @ umd.edu

A map of the connections among the people who recently tweeted #SocialCom2011:

[flickr id="6232130442" thumbnail="medium" overlay="true" size="large" group="" align="none"]

[flickr id="6232129770" thumbnail="medium" overlay="true" size="large" group="" align="none"]
Connections among the Twitter users who recently tweeted the hashtag #socialcom2011, as queried on October 10, 2011, with vertices scaled by number of followers (outliers thresholded). Connections are created when users reply to, mention, or follow one another.

Layout using the “Group Layout” composed of tiled bounded regions. Clusters calculated by the Clauset-Newman-Moore algorithm are also encoded by color.

A larger version of the image is here: www.flickr.com/photos/marc_smith/6232130442/sizes/l/in/ph…

Top users by betweenness centrality:
@danielequercia
@gadgetman4u
@bkeegan
@shaunlawson
@maryheston
@mmiiina
@ronaldomenezes
@theshadowhost
@fergal_reid
@cosleydr

Graph metrics:
Graph Type: Directed
Vertices: 36
Unique Edges: 119
Edges With Duplicates: 155
Total Edges: 274
Self-Loops: 105
Connected Components: 2
Single-Vertex Connected Components: 1
Maximum Vertices in a Connected Component: 35
Maximum Edges in a Connected Component: 273
Maximum Geodesic Distance (Diameter): 5
Average Geodesic Distance: 2.174551
Graph Density: 0.107936508
NodeXL Version: 1.0.1.179

More NodeXL network visualizations are here: www.flickr.com/photos/marc_smith/sets/72157622437066929/

The Myth of Selective Sharing: Why all bits will eventually be public (or be destroyed)

One Way

Bits exist along a gradient from private to public.  But in practice they only move in one direction.

Thus, there are two destinies for information: public or oblivion.

Information wants to be copied.

This is not the same as information wanting to be free (or expensive), or information wanting *you* to be free.  Information probably prefers to be free because it may increase the rate at which it is copied, not because it is inherently liberating to the user.  In fact, the “free” quality of some information is probably not liberating at all.  Copying and liberty are orthogonal.

Information diffuses over time: access rights to information can expand over time, but only rarely (ever?) does data become less available, and once available publicly, information is almost never entirely private again.

With enough copies on enough devices, information becomes essentially public. The state of being public comes in degrees: some things are more public than others. Much information is public in principle but enjoys security through obscurity, and that obscurity is eroded as computing resources make collection and machine analysis affordable at large scales. The banality of data is no protection. “No one cares what I think/do/say/click” is not a valid assumption: in aggregate, the banal becomes data and fuel for many business models. Maybe no one *cares* what you tweet, click, buy, or search for, but many businesses make it their business to aggregate these scattered faint signals and build detailed profiles that drive commerce and customized views of data.

Some information is destroyed, never to be recovered.  This is the only way information can avoid eventually (potentially) becoming public. But less and less data now meets this fate.  Delete is a declining feature of many systems.

Information that is not public and has not yet been destroyed is just waiting to change to either state.

Despite security systems, many private bits are eventually exposed: people pass material to someone else who accidentally makes it public, or they unintentionally do so themselves by leaving files in publicly accessible locations that are visited by search engine spiders and other web crawlers. Even professionally managed private data repositories are subject to subsequent distribution, infiltration, or error. Data spills are becoming more common, and billions of records hemorrhage into the public regularly. If well-funded organizations cannot secure their information, the rest of us should take note.

It may not be possible for any organization, however large, to secure its networks, or even to do so effectively enough to give users a practical period of privacy, however short. Eventually private bits, even when encrypted (no matter how well), become public: the march of computing power makes their encryption increasingly trivial to break, and their exchange over networks (no matter how well secured) is subject to leaking, intentional and otherwise. Private bits may only have a “half-life” during which they retain their non-public existence, and the length of this half-life may itself be getting shorter. Mary Branscombe suggests that there could be a physical law in operation: the natural entropy of access control lists?

All bits that persist are destined to be public, and once public never to be private again. Unless they are destroyed.

I argue that the only bits that you cannot find are the ones you need right now. The only bits you cannot get rid of are the ones that are most embarrassing to you right now.  Just because you cannot find the bits you want does not mean that no one else can find those bits.

All your bits are belong to us.

This issue is getting more important as we are invited to use systems that promise selective sharing of data, while other tools generate ever more data to potentially share. Anything that puts your bits into the cloud promises selective sharing. I believe and hope my much beloved Dropbox account is separate from all the others, except for the ones I choose to share. And I think it is, except for that glitch they had, the details of which elude me (but I think we’re good now, and I so depend on Dropbox I do not know what I would do without it). But all these walls are made out of a few lines of business logic and an Access Control List. ACLs rule our access to digital objects with an iron fist until they don’t, for the many human and technical reasons mentioned. Like most human infrastructures, these selective sharing mechanisms are subject to failure and attack.
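To make that concrete: a toy sketch of the single predicate that often stands between private and public (file names and logic are purely hypothetical):

```ruby
# A minimal access control list: a map from object to allowed readers.
ACL = {
  "budget.xls" => ["alice", "bob"],
  "photo.jpg"  => :public
}

def can_read?(acl, file, user)
  entry = acl[file]
  return true if entry == :public
  Array(entry).include?(user)  # one missing check here and the wall is gone
end

puts can_read?(ACL, "budget.xls", "mallory")  # => false
puts can_read?(ACL, "photo.jpg",  "mallory")  # => true
```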

Now new sources of data, captured from the details of everyday life by sensors and services, are increasingly recorded by external systems and by people themselves, generating new streams of archival material richer than all but the most obsessively observed biographies.

Many organizations are adopting social media and creating data sets that can map their internal social network structure as an accidental by-product of their communication practices. Studying these data sets is a focus of growing interest. Research projects like SenseCam are now becoming products, and existing services like MingleSticks, Poken, FourSquare, and Google Latitude already deliver many of these features. Devices like iPhones and Android phones are weaving location information into every application.

Some steps are still in progress: when my phone notices your phone, a new set of mobile social software applications becomes possible, as whole populations capture data about other people who beacon their identities to one another. Additional sensors will collect ever more medical data with the intent of improving our health and safety, as early adopters in the “Quantified Self” movement make clear.

But the consequences of data diffusion are becoming difficult to predict. Social media systems are being linked to one another to enable cascades of events to be triggered from a single message, as status updates are passed among Facebook, LinkedIn, Twitter, and blogs. Tools now automatically aggregate the results of searches and post articles that may themselves trigger other events. Taking a photo or updating a status message can now set off a series of unpredictable events.

Add potential improvements in audio and facial recognition and a new world of continuous observation and publication emerges.  Some benefits, like those displayed by the Google Flu tracking system, illustrate the potential for insight from aggregated sensor data.  More exploitative applications are also likely.

The result will be lives that are more publicly displayed than ever before. One consequence may be the collapse of roles (“lowest common denominator culture”) described by Bernie Hogan (listen starting at about 40 minutes in, though the entire talk is worth a listen), building on the sociologist Erving Goffman: we are interacting with everyone when we interact with anyone. Secret shared meanings may still be possible, but selectively shared bits are not, at least not very reliably so in the short term and almost certainly not in the medium term.

Therefore, all services that promote the idea of “selective sharing” are selling a myth.  The more you trust that information you generate can be contained, the more potential there is for an “explosive decompression” as data intended for an individual or a small group becomes suddenly available to a large group or a complete population. Private bits are in a state of high potential energy, always poised to become public.

Engineering is the science, art, and practice of containing and directing forces. Information system engineers might be up to the challenge of delivering selective sharing. Combined with law, regulation, and social practices, technology could make selective sharing real, the way engineers direct powerful but dangerous flows of high pressure steam through power plants. However, recently even high pressure steam engineers working with nuclear fuels have faced very bad failure conditions beyond their predicted scope. Information technologists may face analogous issues when managing high pressure containers of selectively shared information.

My policy is not to give up all forms of privacy: I still keep my email and other data behind passwords that I do not (knowingly) share. I share lots of pictures on flickr, but not all of them are public. I would prefer to keep lots of financial, medical, and personal stuff selectively shared. I’d like these features to work.

But I have started to understand that my data is likely to be open to others, if not now then some day, and probably sooner than I expect. The net/cloud holds a good-sized and growing chunk of my digital life, and I would like selective sharing features (if I could handle the cognitive tax of managing them). I just do not believe selective sharing is a reasonable expectation. In a world of increasing interconnection and unifying name and search spaces, data may not be something you can keep local for long.

Tools that suggest we can reliably segregate content and limit its diffusion are suggesting that water does not roll downhill. Those who believe them are likely to get wet.