All networks can be represented solely in terms of the connections between their elements, assuming that whatever combination of factors making people more or less likely to associate with each other is accounted for by the distribution of those associations that actually form. … The likelihood of a new connection being created is determined, to some variable extent, by the already existing patterns of connections. (Watts, 1999)

Because the actual functions governing when, where, and how much information is transmitted are so complex and context dependent, there are a great number of dimensions for conceptualization, experimentation, and analysis. I’ve already mentioned some previous work involving the effects of status and trust on transmission. There are numerous other possibilities to examine: power, transactional exchanges, sex differences, etc. Because the problem is so multidimensional, it is difficult to ascertain which perspective would be the most informative from which to view the data. And of course which variables are relevant also depends on the scale at which the phenomena are examined and explanation is desired.

To me it seems most effective to place less emphasis on the standard sociological variables like class, sex, wealth, age, race, profession, power, high-culture/pop-culture, eastern/western, trust, or physical attractiveness, and focus simply on contact and interaction frequency. My intuition is that, assuming culture to be generally a phenomenon of socially mediated information exchange, any understanding of the dynamics of social interaction will shed some light on the dynamics of information and culture as well. As in every other case where a high-dimensional problem is looked at through a low-dimensional lens, focusing on contact means that many interesting variables will be lumped and collapsed together. But the advantage of focusing on interaction is that many of the other variables can be re-expressed as actor- or attribute-specific biases away from the central tendencies or patterns of interaction. And all of the unexplained or as yet undiscovered parameters can be included by allowing for noise in a model.

Locating and tracking the instances where interaction contacts occur would obviously be difficult, but not quite as unimaginable as tracking information itself. One question that immediately crops up is how to decide what degree of interaction constitutes a contact. What kinds of unintentional communication should be considered? What level of involvement is required of participants? For the time being, it makes sense to use a very broad definition. I’m using the term “contact” to mean that a person, a representation of the person, or some symbolically coded representation of the person’s communication, has become perceptually available to another person. In this sense, the probabilities of contact and interaction are relatively synonymous: both could be described as the likelihood that specific individuals will “bump into each other” in a given time period.

Whenever there is contact, there is a probability that information exchange of some sort and degree will occur between the participants. As the participants continue to interact, information items which they have picked up in an earlier contact may be spread to their future interactants. I have argued that this may lead to increased similarity on some hypothetical metric for culture and meaning. Each specific information “item” or message can have its own transmission probability. This probability is generally not equal to one, because it is always possible that the information will be disregarded, or that other information necessary for its comprehension or retention has not yet been received, so transmission effectively fails to occur. Transmission probabilities will, of course, be related to contact probabilities, but I’m defining them as based on a unit of time rather than an instance of contact.

For each level of analysis, the collected set of probabilities relating a group of individuals can be described as a network, and it is possible to relate the networks. From the perspective of the agent or actor, a network would describe the frequency of interactions with others and the structure of interaction groups. From the perspective of the “culture of the group”, the network would describe the probability of exchanges which lead to increased or decreased similarity. From the perspective of a “unit of information”, the network would describe the distribution of chances for transmission. I think that it is worthwhile to consider some kind of crude explicit formal model which can be used as a null case or conceptual reference point on which to elaborate later. The simplest baseline model I’ve come up with is one centered around the idea of a fixed pattern of interaction. That is, I’m initially viewing individuals as automatons with no will or intelligence, each of whom probabilistically follows her own “preset” pattern of contact relationships. The resulting network acts as a substrate for transmission, and information flowing through it approaches probability distributions related to the lengths of various possible interaction “chains” between actors.

The focus of analysis can be narrowed down to consider the transmission probabilities for one item of information in a population of individuals. If the probability of direct transmission of the item between individuals i and j in a given time period is indicated as **Pt(i~j)** and transmission between j and k as **Pt(j~k)**, then:

**Pt(i~k) = Pt(i~j)*Pt(j~k)**

In other words, the probability of transmission of an item to an individual two steps away is equal to the product of the probabilities of each individual step. Of course there may be other possible paths with probabilities **Pt(i~l)**, **Pt(l~m)**, **Pt(m~k)** such that:

**Pt(i~k) = Pt(i~l)*Pt(l~m)*Pt(m~k) + Pt(i~j)*Pt(j~k)**

Since this is a probabilistic framework, there is always a chance (perhaps vanishingly small) of transmission occurring via some roundabout path within the finite time period under consideration. This means that in order to determine an exact solution for the transmission probability, it would be necessary to compute **Psum(i~k)**, the sum of the transmission probabilities of all possible chains connecting **i** and **k**.
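As a concrete sketch of this sum-of-all-chains calculation, the following toy computation enumerates every simple path between two individuals in a small hypothetical network and adds up the chain probabilities. The network and its values are invented for illustration, and the additive treatment of paths mirrors the approximation in the text rather than an exact solution:

```python
from itertools import permutations

# Hypothetical pairwise one-step transmission probabilities Pt(i~j)
# for a four-person network (indices 0..3); values are illustrative only.
Pt = {
    (0, 1): 0.5, (1, 3): 0.4,   # the direct chain i -> j -> k
    (0, 2): 0.2, (2, 3): 0.1,   # extra links create roundabout paths
    (1, 2): 0.3, (2, 1): 0.3,
}

def path_prob(path):
    """Probability of transmission along one specific chain of individuals."""
    p = 1.0
    for a, b in zip(path, path[1:]):
        p *= Pt.get((a, b), 0.0)
    return p

def psum(i, k, nodes):
    """Sum of transmission probabilities over all simple chains from i to k."""
    total = 0.0
    others = [n for n in nodes if n not in (i, k)]
    # enumerate every ordering of intermediaries, of every length
    for r in range(len(others) + 1):
        for mid in permutations(others, r):
            total += path_prob((i, *mid, k))
    return total

print(psum(0, 3, nodes=[0, 1, 2, 3]))
```

Note that because the chains are simply summed, this quantity can in principle exceed one; it is a reference-point calculation, not a properly normalized probability.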

Initially it appears that the process would be similar to a Markov transition matrix and the transmission probability distribution could be calculated by taking very large powers of the matrix. However, there are several properties of information transmission which appear to violate the assumptions necessary for a Markov process. For one, information is not conservative: when it is transmitted from **i** to **j**, **i** doesn’t usually lose the information. Second, there is generally a probability of **i** forgetting an item (an absorbing state in Markov terms), and also a probability of **j** “figuring it out” or discovering it independently without being directly informed (a generative state). Third, there is the question of what happens if an individual receives an item a second time. Does the confirmation of the initial message increase the probability of telling others, or will the probabilities decrease because the individual assumes that the message is now common knowledge?

Probably the answer depends on the kind of message in question. “Cultural” elements may behave in one fashion and gossip in another. This is a considerable obstacle for any attempt to construct a unifying model. Whether or not the model is analytically tractable depends on the matrix properties of the model – whether or not rows and columns sum to unity, the existence of disconnected components, etc. There are several features which would be expected in a “realistic” model. The initial behavior should be highly dependent on the starting conditions, but the long term behavior ought to be relatively stable, with the distribution approaching some limiting configuration. Ideally, the sum-of-all-paths problem for the matrix could be solved analytically, and the resulting matrix could be multiplied by the vector of in-the-know individuals, to yield a vector giving the probabilities of who is likely to know it in the next time step. Even if the exact solution cannot be determined analytically, it may be possible to arrive at a numerical approximation with an iterative calculation, taking advantage of the fact that the transmission probabilities for very long chains become vanishingly small, so the effects could perhaps be truncated after a certain range. Alternatively, an approximation could be calculated as a stochastic process by using the probabilities in the contact network to simulate a large number of interactions in order to observe the resulting distribution of the message.
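The stochastic-process alternative can be sketched as a toy Monte Carlo simulation. The transmission matrix, the non-conservative spreading rule, and the run counts below are my own illustrative assumptions:

```python
import random

random.seed(1)

# Hypothetical per-time-step transmission probabilities (row -> column);
# a zero means no direct contact between that pair.
Pt = [
    [0.0, 0.5, 0.2, 0.0],
    [0.0, 0.0, 0.3, 0.4],
    [0.0, 0.3, 0.0, 0.1],
    [0.0, 0.0, 0.0, 0.0],
]

def simulate(knowers, steps):
    """Spread one item for `steps` time periods; knowledge is
    non-conservative (a teller keeps the item after passing it on)."""
    knows = set(knowers)
    for _ in range(steps):
        new = set()
        for i in knows:
            for j, p in enumerate(Pt[i]):
                if j not in knows and random.random() < p:
                    new.add(j)
        knows |= new
    return knows

# estimate the probability that individual 3 knows the item after 5 steps
runs = 10_000
hits = sum(3 in simulate({0}, 5) for _ in range(runs))
print(hits / runs)
```

Repeating the simulation many times and tallying who ends up “in the know” approximates the distribution that the analytical sum-of-all-paths calculation is after.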

There are many factors which would seem to make this formulation relatively impractical for more than toy models. One of the main points I’ve been trying to make in this paper is that the contact and transmission probabilities between pairs are NOT static, but are in fact quite time and context dependent. How would the distributions be calculated if there are multiple items in the system simultaneously? Furthermore, many of the bias effects described by Campbell (1958) could result in conditional probability relations between items, meaning that receiving one item could alter the probability of successfully receiving another, and the probability sums would become far more complex. **Pt(i~j)** is intended to represent the pair-wise transmission probability for one “item” given its associated context networks. As the context networks for different items are not likely to be the same, it is difficult to imagine how one might go about empirically determining values for all of the **Pt(i~j)**‘s in a network.

It seems worthwhile to “jump up” a level and consider again only the network of contact probabilities rather than transmission probabilities. I suggested earlier that the pair-wise contact probabilities, **Pc(i~j)** are conceptually equivalent to a sum of all the various structural “driving” networks, and that they are at least hypothetically observable. The two kinds of probabilities could be related by considering contact to be an upper bound on transmission:

**Pc(i~j) ≥ Pt(i~j)**

This means that if sufficiently detailed data on the interaction between group members could be collected, these values of **Pc(i~j)** could be used to estimate a “maximum” probable distribution of information, and, as much as the network is stable, project a short distance into the future.

Another possible way of looking at the temporal evolution of the problem is to make the (unreasonable) assumption that pair probabilities are relatively static and ignore the contributions of the other possible paths in the network. Then, since **Pt(i~j)** is the probability of transmission in one time step, the probability of transmission having occurred after n steps would be:

**1-(1-Pt(i~j))^n**
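A quick numerical illustration of this formula, with an arbitrary per-step probability:

```python
def p_within(p, n):
    """Probability that at least one transmission occurs within n
    independent time steps, given per-step probability p."""
    return 1 - (1 - p) ** n

# with a 10% per-step chance, transmission becomes more likely
# than not by the seventh step
for n in (1, 7, 50):
    print(n, round(p_within(0.1, n), 3))
```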

Although **Pt(i~j)** originally represented the probability of transmission in unit time, it could be reconceptualized as **P’t(i~j)**, the probability of transmission given one interaction “generated” by **Pc(i~j)**; the probability of transmission between nodes after n time steps could then be calculated according to **Pc(i~j)*(1-(1-P’t(i~j))^n)**. Further analytical work is needed to determine whether this equation would be at all meaningful when the whole transmission matrix is considered simultaneously. In other words, does **1-(1-Psum(i~j))^n** accurately describe the probability of transmission between a pair in a matrix after n time steps, or do the probabilities need to be summed in an alternative fashion? But again, in many “natural” social situations it seems **Pc(i~j)** would be fairly unstable. These changes are brought about by all of the unknown “group dynamics” effects, including some of the culture and information effects discussed earlier.

If some of the generalized patterns of group dynamics could be expressed in this sort of probabilistic network terminology, it might be possible to code them into a theoretical model to make possible qualitative estimates of future states of contact distribution for a network, and the contact distribution could be used to obtain an impression of information distributions. This is essentially the approach taken by Snijders and his colleagues in their “Stochastic Actor-Oriented Network Models” (Snijders & Marijtje 1997, Snijders 2001a), but their emphasis is more on estimating the magnitude of various network functions in observational data collected at several time points. However, because their approach is to run the models as a simulation and quantitatively compare the resulting networks to observed social networks, their algorithms can also be used as a generalized model for social network construction and dynamics.

Skyrms and Pemantle (2000) examine the dynamics of a very simple model of the formation of social interaction networks. In their simplest setup, agents start out with equally weighted probabilities of “visiting” each other and these weights are incremented after each successful visit. Even this simple model is sufficient to generate some degree of “structure” in the relations as initial random perturbations are magnified through later interactions. One of their slightly more complex models increments the interaction weights for both the visitor and the visited. This is in many ways analogous to a model that could be created by assuming that cultural exchange between the actors leads to increased similarity and future interaction. Later developments of Skyrms and Pemantle’s models add discounting of past interactions, noise, or game-theoretic interactions between the actors. Not surprisingly, the results are difficult to summarize succinctly. However, one result was that in many situations in which the limiting configuration was empirically derivable, the networks often took a great deal of time to converge. And the addition of discounting and noise generally resulted in the network’s eventual convergence on sets of dyads, or occasionally stars.
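A minimal sketch in the spirit of the symmetrized visiting model might look like the following. The number of agents, the increment size, and the run length are my own illustrative assumptions, not Skyrms and Pemantle’s parameters:

```python
import random

random.seed(2)

N = 6          # number of agents
STEPS = 20_000

# start with uniform visiting weights (no self-visits)
w = [[0.0 if i == j else 1.0 for j in range(N)] for i in range(N)]

def choose(weights):
    """Pick a partner with probability proportional to its weight."""
    return random.choices(range(N), weights=weights)[0]

for _ in range(STEPS):
    i = random.randrange(N)
    j = choose(w[i])
    # reinforce both directions, as in the variant where visitor
    # and visited both increment their weights
    w[i][j] += 1.0
    w[j][i] += 1.0

# initial random perturbations get magnified: each agent ends up
# concentrating its visits on a few partners
for i in range(N):
    total = sum(w[i])
    print(i, [round(x / total, 2) for x in w[i]])
```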

An interesting question which was not within the scope of Skyrms and Pemantle’s (2000) paper is what happens when the interactions take place in sets larger than dyads. This is partially a question of the level of analysis. Although it may be possible, by looking at extremely fine time scales, to break apart any relations between groups into sets of dyadic interactions, it seems that some situations might be better described as sets of simultaneous relations. When a lecturer addresses a crowd, for example, everyone may be receiving roughly the same information or degree of contact. Or when groups of friends meet and participate in joint interactions – one common dinner-table conversation, for example – the “bond” that is reinforced may be among all of the members of the group as a whole, and might not apply in the same fashion to individual pairings in other contexts. In many social media (email and phones perhaps being an exception) interactions between pairs of individuals are generally not isolated and independent. Interactions are often visible to other individuals16, and can occur simultaneously. Individuals often act to “catalyze” interaction between mutually unacquainted friends – either intentionally through social events, or coincidentally due to overlapping visits.

This is actually part of the discussion of the small world problem. To what degree is the likelihood of two individuals meeting determined by the number of friends they have in common? Jin and Newman (2000) develop an interaction-based friendship network dynamics model which allows for transitive friendship effects, as well as saturation effects due to the assumption that a person has only a finite amount of time available for maintaining relationships (a person cannot have more than a certain number of friends and acquaintances). In the most basic version of their model, actors were selected randomly to interact with a probability proportional to the length of time since their last interaction. Initial analysis of the model shows that it is quite capable of generating networks with properties (clustering coefficient and characteristic path length) similar to observed networks.
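A rough sketch in this spirit – meeting probability boosted by mutual friends, plus a degree cap standing in for saturation – might look like the following. The base rate, the bonus per mutual friend, and the cap are all hypothetical values of my own, not Jin and Newman’s actual specification:

```python
import random

random.seed(3)

N = 30
MAX_DEGREE = 5   # saturation: finite time for maintaining ties
STEPS = 4000

friends = {i: set() for i in range(N)}

def mutuals(a, b):
    """Number of friends a and b have in common."""
    return len(friends[a] & friends[b])

def clustering(i):
    """Fraction of i's friend-pairs who are themselves friends."""
    nbrs = list(friends[i])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for x in range(k) for y in range(x + 1, k)
                if nbrs[y] in friends[nbrs[x]])
    return 2 * links / (k * (k - 1))

for _ in range(STEPS):
    a, b = random.sample(range(N), 2)
    # meeting is more likely when a and b share acquaintances;
    # the small base rate stands in for random encounters
    p_meet = min(1.0, 0.01 + 0.3 * mutuals(a, b))
    if random.random() < p_meet:
        if len(friends[a]) < MAX_DEGREE and len(friends[b]) < MAX_DEGREE:
            friends[a].add(b)
            friends[b].add(a)

degrees = [len(friends[i]) for i in range(N)]
print("mean degree:", sum(degrees) / N)
print("mean clustering:", sum(clustering(i) for i in range(N)) / N)
```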

If modeling is intended to represent processes which could occur in actual social networks, it seems important to include, as these models do, the possibility of noise and randomness. In the real world, even if two individuals are not connected (or perhaps connected by very long acquaintance chains) there is still a small probability of them interacting through happenstance – non-socially mediated random encounters. The chance of this kind of encounter could be fairly high in some situations – in an interactionally dense and spatially overlapping setting like a college campus, for example. Also, this noise term can serve as a catch-all for all the non-correlated parameters which are not included in the model. Noise in a model gives the connections some freedom to “random walk” so that the network would be gradually but continuously shifting. The interplay between noise and structural effects can be complex. Random connection changes could establish links to isolates and to sections of the network which were disconnected in the initial condition, but if random effects are very large, they will likely wipe out most of the structure and result in a fairly homogeneous network. (Jin & Newman 2000)

It is probably important to stop and rephrase this discussion in more social terms. The kind of model I’m interested in investigating describes a situation in which a group of actors start out with some initial distribution of contact frequencies. They all go around looking for people to bump into, based on who they remember interacting with before, with the addition of some slight bias due to demographic variables and spatial effects. When one or more people interact (overlapping bumps), they all remember the interaction and forget all their previous interactions a little bit. If one group ends up bumping a lot, they may remember each other so much that they forget the people they used to bump into. “History” or “memory” parameters control how forgetful people are, and the randomness parameter controls how often they “randomly” bump into each other.17
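This “bump and forget” description can be sketched directly as a simulation. The **MEMORY** and **NOISE** constants below correspond to the history and randomness parameters just described; their values, and the dyadic simplification of the “overlapping bumps,” are illustrative assumptions:

```python
import random

random.seed(4)

N = 8
MEMORY = 0.95   # how slowly past interactions are forgotten
NOISE = 0.05    # chance of a purely random "bump"
STEPS = 5000

# contact weights, initially uniform (no self-contact)
w = [[0.0 if i == j else 1.0 for j in range(N)] for i in range(N)]

for _ in range(STEPS):
    i = random.randrange(N)
    if random.random() < NOISE:
        j = random.choice([x for x in range(N) if x != i])  # happenstance
    else:
        j = random.choices(range(N), weights=w[i])[0]       # habit
    # remembering this bump means slightly forgetting all the others
    for k in range(N):
        w[i][k] *= MEMORY
        w[j][k] *= MEMORY
    w[i][j] += 1.0
    w[j][i] += 1.0

# rows, normalized, give each person's current contact distribution
for i in range(N):
    total = sum(w[i])
    print(i, [round(x / total, 2) for x in w[i]])
```

With MEMORY close to one the network changes slowly; lowering it makes people “forget the people they used to bump into” faster, and raising NOISE keeps the structure continuously shifting.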

This model can be extended in several ways to make it more realistic. For one, it is important to consider the initial setup of the probabilities at time zero. A fair amount of the dynamics may depend on this initial configuration. This initial matrix can be viewed as coding in some of the structural and attractive parameters of the actors in the network based on the structural relations between them due to the environment, role embeddedness, etc. For example, at Bennington College it may be that individuals who share a house or are in the same course have much higher interaction probabilities simply because they often occupy the same physical space. A constant value could be added to the weights of the connections between individuals in the same house. If charisma and sociability are shown to have an effect on tie generation, a value could be added to the probability of forming connections to the popular people. In Snijders’ model, this was accounted for by allowing the attributes of each actor to impact their “network attractiveness”. (Snijders 2000a) It seems consistent with my personal experience that individuals preferentially associate with others who are “culturally similar” to themselves. If actors participating in an interaction do transmit and exchange “cultural material” or information, it is likely that they would leave the encounter in some way “more similar” than they were beforehand, and that this might mean an increased likelihood of future interaction. So information transmission would “feed back” into the network formation process. But it is difficult to say what the magnitude of this effect is in comparison with, or how it is related to, all of the many complex “psychological” variables of attraction or strategic choices on the part of actors. A crucial step towards becoming clearer about what kinds of models are appropriate would be to get more information on the processes as they occur in the real world.
If high quality data could be simultaneously collected on the network dynamics, information transmission, and demographic variables of a large community, it could go a long way toward answering some of these questions.
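A hypothetical construction of such a time-zero matrix, coding shared-house and shared-course effects as constant additions to a small baseline weight, could be sketched as follows (the names, bonuses, and baseline are all invented for illustration):

```python
# Build an initial contact-weight matrix Pc0 in which structural
# overlaps (same house, shared course) raise the baseline weight.
N = 6
BASE = 0.01
HOUSE_BONUS = 0.10
COURSE_BONUS = 0.05

house = [0, 0, 0, 1, 1, 1]   # which house each individual lives in
courses = [{"bio"}, {"bio", "art"}, {"art"},
           {"bio"}, {"math"}, {"art", "math"}]

Pc0 = [[0.0] * N for _ in range(N)]
for i in range(N):
    for j in range(N):
        if i == j:
            continue
        p = BASE
        if house[i] == house[j]:
            p += HOUSE_BONUS        # shared physical space
        if courses[i] & courses[j]:
            p += COURSE_BONUS       # shared scheduled activity
        Pc0[i][j] = p
```

Actor-level attributes like charisma could be coded the same way, as a per-column addition, in the spirit of Snijders’ “network attractiveness”.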

**notes**

17 When I say that individuals or networks “remember” or “forget,” I do not mean that they forget a person the same way one might forget a phone number. Each person does not need to explicitly remember a large number of past states. The type of memory I’m referring to is more along the lines of the dynamical systems concept of the number of states back in time which must be taken into account in predicting the next state. In other words, if where I sit in the dining hall is completely random, there is no memory effect from where I sat in previous meals. If where I sit today is strongly influenced by where I sat yesterday, then there is some degree of memory in my seating choices. If where I sit today is influenced by where I sat yesterday as well as where I have been sitting for the last four years, then the system has a strong memory.