ERC Consolidator Starting January 2025

Delighted to have secured an ERC Consolidator grant for ID-COMPRESSION:

Starting in January 2025, this project will develop and test a framework for understanding how identity is “written in” to social information via social interaction; how this reduces complexity; and how social information (like attitudes) thereby becomes a substrate for identity. Of course, this is closely related to familiar social psychology topics explaining how people generate simplified representations of the world — like categorization, stereotyping, and heuristics — but it is more basic, and provides a fundamental explanation for the social process of simplification and its social functions. Mini-spoiler: the framework treats social interaction as a form of computation, with compressible identity-laden information as a product.

Like DAFINET, it will have an applied maths/network science/information team working closely with a social science team to develop a novel take on social processes.

We’ll be recruiting two Postdocs (probably up to five years) and two funded PhD students in the second half of 2024 for a January 2025 start, so keep an eye on our social media accounts for information and announcements.

Halfway through; but are we halfway there?

It was July 2018 when I blogged the great news about an ERC Starter grant to develop a network theory of attitudes. And then I got busy, I guess, because I didn’t post here again until October 2021. We’re now about halfway through the grant, so I thought this would be a good time to write a bit about what we’ve done, and what we have left to do.

The first thing we did with the money was to start assembling an interdisciplinary team, with people trained in social psychology, mathematics, statistics, physics, and computational modelling. Our aim was to build a supportive and collegial team, and I think we’ve achieved that. Regardless of our successes and failures, we enjoy working together.

Our focus from the start was on evaluating the applicability of the proposed network models to the phenomenon (theory development), and experimentally confirming the proposed psychological mechanisms linking attitudes and social identity (basic human experiments). We’ve made good progress on both of these aims.

The core idea of the proposal is that people are linked by the attitudes they jointly hold, and that attitudes become socially connected when they are jointly held by people.

Attitudes visualized as a bipartite network

From this basic idea, we have developed methods to construct bipartite networks directly from survey-based data to visualize the relationships between people on the basis of shared attitudes. For example, this network is a visualization of vaccine-related attitudes in Bangladesh, from the Wellcome Global Monitor:

People-connected-by-attitudes in the Bangladesh vaccine opinion network

We can also view this social structure from the “other side,” and map connections between attitudes (on the basis of being jointly held by people). This produces a network-view of a set of opinions similar to those generated in belief-network analysis, but with fewer statistical assumptions. Here are the connections between attitudes in the same Bangladeshi vaccine network:

Attitudes-connected-by-people in the Bangladesh vaccine opinion network

An important feature of the method is that it treats these two views as inseparable features of the same social structure. You can read more about this method in our preprint. (And note that we have realized that we are walking in the footsteps of the sociologist Ronald Breiger, who referred to this as “the duality of persons and groups.”)
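For the mechanics, here is a minimal sketch (using made-up data and hypothetical variable names, not our survey pipeline) of how agreement responses can be turned into a bipartite graph and projected into its two dual views with igraph:

library(igraph)

# Toy data: rows are respondents, columns are attitude items,
# 1 = endorses the item, 0 = does not (hypothetical, randomly generated)
set.seed(42)
responses <- matrix(rbinom(6 * 4, 1, 0.5), nrow = 6,
                    dimnames = list(paste0("person", 1:6),
                                    paste0("item", 1:4)))

# Two-mode (bipartite) graph linking people to the attitudes they endorse
# (newer igraph versions call this graph_from_biadjacency_matrix)
g <- graph_from_incidence_matrix(responses)

# The two dual one-mode projections of the same structure
proj <- bipartite_projection(g)
people_net   <- proj$proj1  # people connected by jointly held attitudes
attitude_net <- proj$proj2  # attitudes connected by being jointly held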

We have applied this method to empirical attitude data collected in the first wave of the COVID pandemic in the UK. We were able to detect opinion-based groups, track their evolution over time, and show that membership of these dynamic opinion-based groups was associated with group-relevant health behaviours (doi:10.1111/bjso.12396). We think that the method is providing a unique, if incomplete, window on the relation between opinions, group identity, and behaviour.

Dynamic opinion-based groups evolving during first wave of the COVID pandemic in the UK, viewed as people-connected-by-opinions (left) and opinions-connected-by-people (right)

We have done some work exploring the relationship of the method to other cluster-detection methods (e.g. hierarchical cluster analysis and stochastic block models), and found that we get broadly similar results but with the advantage of being able to locate individuals precisely in the structural opinion-space (see our preprint; currently in press at Advances in Complex Systems). The method also lends itself to detecting and quantifying opinion-based (i.e. ideological) polarization; and, as long as there is synchronization on opinions within groups, can even detect polarization without extremism (e.g. when two groups hold moderate opinions and yet never quite agree). 

Perhaps most importantly, it gives us the ability to produce very cool visualisations :D!

In our proposal, we said we would develop agent-based models (ABMs) implementing this type of social structure. With the help of experts who attended our project-launch workshop, we realized that Axelrod’s classic model of cultural dissemination natively relies on a bipartite network structure of people connected by attitudes. We extended this model, introducing a multidimensional equivalent of an agreement threshold, and found that the model generates plausible clusters similar to those observed in real data (doi:10.1371/journal.pone.0233995). We have shown that this model (where network edges represent similarity in opinions) is relatively impervious to the underlying social network topology (e.g. friendship links), and we can therefore use this agent-based model to simulate opinion dynamics in realistic social systems even when we don’t know the social connections between people (doi:10.1016/j.physa.2021.126086). Our next step in this line of research is to see whether we can use the agent-based model to say anything useful about identity and opinion dynamics in real social systems (initially observed via surveys).
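The published models are described in the papers cited above; purely as a flavour of the mechanism, here is a toy sketch (hypothetical parameters, binary opinions, not the published implementation) of an Axelrod-style update with an agreement threshold: two randomly chosen agents interact only if their opinion overlap exceeds the threshold, in which case one copies a single opinion from the other.

set.seed(1)
n <- 50; d <- 10; threshold <- 0.5
opinions <- matrix(sample(c(-1, 1), n * d, replace = TRUE), nrow = n)

axelrod_step <- function(opinions, threshold) {
  pair <- sample(nrow(opinions), 2)                          # two random agents
  sim  <- mean(opinions[pair[1], ] == opinions[pair[2], ])   # fraction of shared opinions
  if (sim > threshold && sim < 1) {                          # interact only above the threshold
    differing <- which(opinions[pair[1], ] != opinions[pair[2], ])
    k <- if (length(differing) == 1) differing else sample(differing, 1)
    opinions[pair[2], k] <- opinions[pair[1], k]             # adopt one opinion from the partner
  }
  opinions
}

for (t in 1:10000) opinions <- axelrod_step(opinions, threshold)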

However, while the research above is very promising, it doesn’t tell us much about whether people actually perceive the world in this way, or whether the social structures we observe using these methods make any difference to how people socially identify or act in the world.

To examine these questions, we developed a new method that allows us to extract and illustrate belief networks among different groups in what we are calling an “attitude space” (see our preprint). The graphic below shows the attitude space of different political partisan groups in the US, based on participants’ item responses. To test whether attitudes can serve as markers of social identities, we correlated participants’ item responses with their self-reported political identification and found that the two evident attitude clusters map onto Democrats’ and Republicans’ attitude positions. In a second step, we exposed participants to other people’s attitude positions to see how the expression of attitudes by others relates to social perceptions. Our results showed that the distance between one’s own attitude position in the network and the position of an attitude expressed by another person correlated reliably with how people socially categorize and emotionally perceive others (manuscript in preparation). We believe that this approach offers a novel view on the complex interplay between attitudes and polarization, both affective and ideological.

Top: An extracted “Attitude Space” based on participants’ responses to 8 political items taken from the ANES survey. The network shows the correlation of responses, with strong disagreement (dark blue), modest disagreement (pale blue), neutral (grey), modest agreement (orange), and strong agreement (red). Above: The same network, showing participants’ self-categorization as Democrat (blue) and Republican (red).
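As a rough illustration of the idea (only the correlational core, not the full method in the preprint, which combines belief-network analysis with item response theory), a belief network among items could be sketched like this, with hypothetical data:

library(igraph)

# Hypothetical data: a respondents-by-items matrix of Likert responses
item_responses <- matrix(sample(1:5, 200 * 8, replace = TRUE), ncol = 8)

# Item-item correlations become weighted edges of a simple belief network
cors <- cor(item_responses, use = "pairwise.complete.obs")
diag(cors) <- 0
belief_net <- graph_from_adjacency_matrix(abs(cors), mode = "undirected",
                                          weighted = TRUE)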

While correlational insights like these help us understand how attitudes translate into social identity phenomena, a main focus of our current work is on social experiments that test the causal hypotheses on attitude-identity dynamics offered by our model.  

To test the psychological basis of opinion-based group identification, we wanted to develop a version of the minimal groups paradigm where people have the chance to agree and disagree with collaborators on novel statements that they’ve never heard before. This should allow us to test whether (a) joint agreement on attitudes promotes a sense of ingroup identification (over and above the identification from simply belonging to a minimal group) and (b) whether opinions that are incorporated into these minimal group identities are more strongly held as a consequence. To do so, we have developed a method for generating novel attitude statements — attitudes to which people have not yet been exposed and are unlikely to have a pre-existing opinion on (for example, “a circle is a noble shape”). We have piloted a selection of these, and now have a battery of novel opinions (or “Attitude sets”) to deploy in our experiments.

Despite Covid-related delays to our virtual interaction experiments (which previously relied on people participating together in a lab), we have run three experiments with human participants. These demonstrate that:

(1) Sharing novel opinions (and more specifically, expressing agreement on such opinions) results in people experiencing a sense of shared group identity.

(2) The sense of identification produced in opinion-based groups is stronger than that produced in classical “minimal group” conditions, where groups are differentiated on arbitrary dimensions (like preference for one painting over another).

(3) People come to have more certainty about attitudes that are associated with an emerging group identity.

We have also started to explore the role of attitudes in social media behaviour. We have identified a portfolio of real political bots, using available tools like Botometer, which we use to generate materials for experiments. In one of our ongoing projects, we expose participants to different bot and human accounts, and test whether attitude (dis)agreement between people’s own opinions and those held by an account profile can predict misperceptions of bot accounts as real users and vice versa. In other words, are people more willing to accept accounts as human if their posts are ideologically aligned with the participant’s own identity? Based on these emerging results, we will design further studies to trial “micro interventions” that may help combat the spread of misinformation through online networks.

Examples of Republican (left) and Democrat (right) bots from our material portfolio.

In the meantime, the VIAPPL software platform has been updated to allow a novel network game in which participants exchange opinions, and we will be piloting this in the coming months. The platform offers a unique opportunity to study social identity effects in experimentally controlled social interaction settings, providing a high degree of internal validity. While the key focus of the VIAPPL research avenue will be on understanding group formation through attitude alignment, it offers plenty of scope to explore other factors. Manipulations of audiences (e.g. attitude homogeneity vs. divergence, as on social network sites), motivational states (e.g. zero-sum games, threats), or different attitude formats (e.g. “neutral” attitudes, socio-political attitudes) are just a few initial ideas, and the possibilities are expanding through input from our team and our network of collaborators.

So, are we halfway there? Overall, our results exceed the expectations I had for the halfway mark when I wrote the proposal. There are some areas where we are lagging behind, but others where we’ve already gone well beyond what we could have imagined at the start. We’re feeling excited about how these ideas can be applied and extended well beyond the scope of this grant.

We’re currently (October 2021) recruiting a new member of our team, and looking forward to seeing what this person brings to this programme of research! While we hope that people are excited by the work we’re doing, we are also aware that the complexity can be quite overwhelming to start with. Multidisciplinary work is very exciting, but we should be clear that no single person in the group understands every feature of the research we’re doing. The social psychologists lean heavily on the mathematicians/statisticians; and they rely on us for linking their models to social theory. In fact, that’s the thing I’ve enjoyed the most about the first half of this ERC grant – the great privilege of being able to work on ideas much bigger than I could have tackled alone.  

[Attribution: Thanks to Adrian Lüders for additions to the section on the social experiments; and some great images from his current studies. All of the research reported here is a team effort! Please see references below for authorship. Disclaimer: some of the text is repeated in the ERC mid-term scientific report, which you will find on the European Commission Cordis website. For more information on the studies reported above, see the publication & impact page on the DAFINET website.]

References:

Carpentras, D., Lueders, A., & Quayle, M. (2021). A method for exploring attitude systems by combining belief network analysis and item response theory (Resin) [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/uzdcg

Dinkelberg, A., MacCarron, P., Maher, P. J., & Quayle, M. (2021). Homophily dynamics outweigh network topology in an extended Axelrod’s Cultural Dissemination Model. Physica A: Statistical Mechanics and Its Applications, 578, 126086. https://doi.org/10.1016/j.physa.2021.126086

Dinkelberg, A., O’Sullivan, D., Quayle, M., & MacCarron, P. (2021). Detecting opinion-based groups and polarisation in survey-based attitude networks and estimating question relevance. ArXiv:2104.14427 [Physics]. http://arxiv.org/abs/2104.14427

MacCarron, P., Maher, P. J., Fennell, S., Burke, K., Gleeson, J. P., Durrheim, K., & Quayle, M. (2020). Agreement threshold on Axelrod’s model of cultural dissemination. PLOS ONE, 15(6), e0233995. https://doi.org/10.1371/journal.pone.0233995

MacCarron, P., Maher, P. J., & Quayle, M. (2020). Identifying opinion-based groups from survey data: A bipartite network approach. ArXiv:2012.11392 [Physics]. http://arxiv.org/abs/2012.11392

Maher, P. J., MacCarron, P., & Quayle, M. (2020). Mapping public health responses with attitude networks: The emergence of opinion‐based groups in the UK’s early COVID‐19 response phase. British Journal of Social Psychology, 59(3), 641–652. https://doi.org/10.1111/bjso.12396

ERC starter grant to develop a network theory of attitudes

Delighted to announce an ERC starter grant to develop a network theory of attitudes. Here’s the abstract:

Understanding the coordination of attitudes in societies is vitally important for many disciplines and global social challenges. Network opinion dynamics are poorly understood, especially in hybrid networks where automated (bot) agents seek to influence economic or political processes (e.g. USA: Trump vs Clinton; UK: Brexit). A dynamic fixing theory of attitudes is proposed, premised on three features of attitudes demonstrated in ethnomethodology and social psychology; that people: 1) simultaneously hold a repertoire of multiple (sometimes ambivalent) attitudes, 2) express attitudes to enact social identity; and 3) are accountable for attitude expression in interaction. It is proposed that interactions between agents generate symbolic links between attitudes with the emergent social-symbolic structure generating perceived ingroup similarity and outgroup difference in a multilayer network. Thus attitudes can become dynamically fixed when constellations of attitudes are locked-in to identities via multilayer networks of attitude agreement and disagreement; a process intensified by conflict, threat or zero-sum partisan processes (e.g. elections/referenda). Agent-based simulations will validate the theory and explore the hypothesized channels of bot influence. Network experiments with human and hybrid networks will test theoretically derived hypotheses. Observational network studies will assess model fit using historical Twitter data. Results will provide a social-psychological-network theory for attitude dynamics and vulnerability to computational propaganda in hybrid networks.

The theory will explain:

(a) when and how consensus can propagate rapidly through networks (since identity processes fix attitudes already contained within repertoires);

(b) limits of identity-related attitude propagation (since attitudes outside of repertoires will not be easily adopted); and

(c) how attitudes can often ‘roll back’ after events (since contextual changes ‘unfix’ attitudes).

 

The proposed project capitalizes on multi-disciplinary advances in attitudes, identity and network science to develop the theory of Dynamic Fixing of Attitudes In NETworks (DAFINET).

Specifically, DAFINET integrates advances in identity research from social psychology, models of attitudes from ethnomethodology (a branch of sociology), and multilayer network modelling from network science to propose a novel theory of opinion dynamics and social influence in networks. DAFINET will have impact in a broad range of disciplines where attitude propagation and social influence is of concern, including in economics, sociology, social psychology, marketing, political science, health behaviour, environmental science, and many others.

Attempt at network stress testing in R

I’ve been asked by reviewers to stress test two networks following Albert, Jeong and Barabási (2000). Critically, the reviewers asked for an exploration of how network diameter changed as progressively larger numbers of nodes were randomly dropped from the networks.

Although the netboot library makes it trivial to do a case-drop bootstrap on a network, it reports a limited set of network statistics and diameter is not one of them.

Here’s an attempt to run a stress test on network diameter for a small (1000 node) randomly generated ring network. I’m sure there are more efficient ways of doing this, and I’m concerned that the algorithm might struggle with the large real-world networks I’ll be applying it to, but I’m proud of the pretty output for now:

library(igraph)     # diameter(), induced_subgraph(), V(), as_ids()
library(tidygraph)  # create_ring()

#Function graphdropstats accepts graph object and number of cases to drop
#drops ndrop cases(vertices) (using uniform random distribution to identify nodes to drop)
#then returns statistic on subgraph, in this case diameter
# V(graph) gives list of nodes in graph
# vcount(graph) gives number of vertices, but more efficient to get this from length of V(graph)

graphdropstats <- function(graph, ndrop){
  keepnodes <- V(graph)                           # vertex sequence of the graph
  droplist  <- sample(as_ids(keepnodes), ndrop)   # vertex IDs to drop, chosen uniformly at random
  keepnodes <- keepnodes[-droplist]               # IDs equal positions for unnamed graphs
  samplegraph <- induced_subgraph(graph, keepnodes)
  return(diameter(samplegraph))
}

#generate graph for testing
graph1<-create_ring(1000)

## sampling with nreps replications: for each ndrop from 1 to ndropstop,
## drop ndrop nodes at random and record the diameter of the remaining graph
nreps <- 100
ndropstop <- 100

# one row per value of ndrop, one column per replication
allresults <- matrix(NA_real_, nrow = ndropstop, ncol = nreps)

for (ndrop in 1:ndropstop){
  for (i in 1:nreps) {
    allresults[ndrop, i] <- graphdropstats(graph1, ndrop)
  }
}

matplot(allresults, type = 'p', pch = 15, col = "gray70",
        xlab = "N vertices dropped at random", ylab = "Network diameter")
lines(1:ndropstop, rowMeans(allresults), col = "red", lwd = 2)

#Edit 27/3/2018: bugfix

This gives us this plot:

… which is pretty much what I’m looking for. It shows, as expected, that ring networks are highly vulnerable to node dropout. Compare to a 1000-node scale-free network:
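To generate the comparison network, one option (a sketch, not necessarily the exact call used for that figure) is igraph’s preferential-attachment model:

# A 1000-node scale-free (Barabási-Albert) graph for comparison;
# rerun the sampling loop above with graph1 replaced by graph2
graph2 <- sample_pa(1000, directed = FALSE)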

Fingers crossed that it’s efficient enough to run on large co-authorship networks!

 

  • Albert, R., Jeong, H., & Barabási, A.-L. (2000). Error and attack tolerance of complex networks. Nature, 406(6794), 378–382. https://doi.org/10.1038/35019019

 

Transcription nirvana? Automatic transcription with R & Google Speech API

For as long as I’ve been doing qualitative analysis I’ve been looking for ways to automate transcription. When I was doing my masters I spent more time (fruitlessly) looking for technical solutions than actually doing transcription. Speech recognition has come a long way since then; perhaps it’s time to try again?

I came across a blog post recently that suggested it’s becoming possible using the Google Speech API. This is the same deep-learning model that powers Android speech recognition, so it seems promising.

After setting up a GCloud account (currently with $300 of free credit; not sure how long that will last), installing the R library is simple:

#install package; run first time or to update package....
#devtools::install_github("ropensci/googleLanguageR")
library(googleLanguageR)
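Authorization uses a service-account key downloaded from the GCloud console (the path below is a placeholder):

# Authenticate with a GCloud service-account .json key (placeholder path)
gl_auth("<<path to service account key.json>>")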

Once you’ve authorized with GCloud (a single line of code), the transcription itself requires a single command:

gl_speech("path to audio clip")

I tested it with a really challenging task: a 15 second clip of the Fermanagh Rose from the 2017 Rose of Tralee:

Then run the transcription:

audioclip <- "<<path to audio file>>"
testresult <- gl_speech(audioclip, encoding = "FLAC", sampleRateHertz = 22050,
                        languageCode = "en-IE", maxAlternatives = 2L,
                        profanityFilter = FALSE, speechContexts = NULL,
                        asynch = FALSE)
testresult

Which spat out:

 startTime endTime word
1 0s 1.500s things
2 1.500s 1.600s are
3 1.600s 2.600s boyfriend
4 2.600s 2.700s and
5 2.700s 3.200s see
6 3.200s 3.600s uncle
7 3.600s 7.100s supposed
8 7.100s 7.300s to
9 7.300s 7.400s be
10 7.400s 7.500s on
11 7.500s 12.200s something
12 12.200s 12.700s instead
13 12.700s 13s so
14 13s 14.600s Big
15 14.600s 14.900s Brother
16 14.900s 15.400s big
17 15.400s 15.800s buzz
18 15.800s 16.300s around
19 16.300s 17.300s Broad
20 17.300s 17.600s range
21 17.600s 17.900s at
22 17.900s 24.300s Loughborough
23 24.300s 24.700s bank
24 24.700s 25.100s whereabouts
25 25.100s 25.100s in
26 25.100s 25.600s Fermanagh
27 25.600s 27.700s between
28 27.700s 28.300s Fermanagh
29 28.300s 28.800s Cavan
30 28.800s 29.700s and
31 29.700s 29.800s I
32 29.800s 30.100s live
33 30.100s 30.400s action
34 30.400s 30.600s the
35 30.600s 30.900s road
36 30.900s 31.500s on
37 31.500s 31.800s for
38 31.800s 32s the
39 32s 32.200s Marble
40 32.200s 32.300s Arch
41 32.300s 32.400s Caves
42 32.400s 33.800s and
43 33.800s 34.400s popular
44 34.400s 34.600s culture

Honestly, that’s not bad — although not quite usable. It’s certainly a good base to start transcribing from. I was not expecting it to deal so well with fast speech and regional dialects. Perhaps transcription nirvana will arrive soon; not quite here yet, but quite astonishing that such powerful language processing is so easily accomplished.

An intriguing computer-based metaphor for culture

Psychologists have exploited computers as metaphors for the human brain ever since their invention. Concepts like “short term memory” and “long term memory” as functional cognitive units that pass information from one to another owe their provenance to computer metaphors.

These metaphors, however, are based on particular technical instantiations of computing; there are unimaginably many ways to instantiate computers as technological objects, including in DNA, slime, and liquid crystal. Even the cloud-based systems powering technology experiences today are radically different from the self-contained computing units that spawned the computer-based metaphors at the heart of cognitive psychology. For example, web pages hardly ever exist on a single server anymore. When called, they are constructed on the fly from databases and servers with the illusion of being a unitary object. This very webpage was constructed with 93 calls to four domains; each of those calls would have been served by a server accessing multiple databases in order to fulfill the request. A simple blog page is constructed on the fly by literally hundreds of processes hosted on multiple servers.

The information-processing metaphor of the human brain is based on the standalone serial computer; and in practice those barely exist anymore. New forms of computing, like “cloud-computing”, radically disrupt these metaphors.

pingfs (ping file system) is a file storage system that stores data in the internet itself, as packets bouncing between routers in a network. As a packet is received it is bounced back as a new packet. No local storage exists beyond that required to read the message, bounce it, and instantaneously delete the local copy. The data is “stored” primarily between nodes, not within them; like storing tennis balls by juggling them.

This seems like a far better metaphor for memory than the “short term memory”[RAM]/”long term memory”[Hard-drive] distinction. It captures the social nature of memory, and how individuals primarily remember things they are reminded of.

But as a metaphor for social life and memory it could be improved. What if  nodes in the network selectively bounced packets based on agreement and disagreement? What if packets were subtly changed each time they bounced? This would start to approximate a metaphor for culture, and capture how information is simultaneously transmitted and stored; that the act of transmission is also a mechanism of storage.

This metaphor starts to capture some of the magic of cultural memory; moving the locus of action from the inside of individual brains to the spaces between people, as post-structural theorists have long suggested.  Culture, according to this metaphor, is produced and maintained by the constant flurry of interaction between its members. It is what happens between people, not within people, that creates memory.  Obviously, this is only possible if the people have the capacity to “bounce packets” of information in appropriate ways, but it is a metaphor that highlights that meaning and memory cannot be made alone.


Social network structure & collective cooperation

 

In social psychology we’re interested in how group identity and group processes impact on individual experience and behaviour. Until now the field has focused largely on how people perceive groups and identity, and has not worried too much about the structure of social connections. Network structure, however, makes a big difference to social outcomes at collective levels, and we’re now getting tools and models to start to make sense of it all.

Allen and colleagues (2017) have recently shown that cooperation is more likely to emerge in networks with fewer but stronger ties at local levels than in networks with more (but weaker) connections. This is theoretically exciting, as it shows that it is possible and fruitful to analyze social psychological constructs in relation to network structure.

It’s also deeply concerning, since the digital platforms that mediate more and more of our social relationships (Twitter; Facebook; Instagram) are cultivating social networks with large numbers of weak ties — exactly the kinds of relationships that, according to Allen et al., will result in less cooperative networks at large scales.

Counterintuitively, if we want more cooperative societies we might need to spend less time on our phones and see fewer people more often.

  • Allen, B., Lippner, G., Chen, Y.-T., Fotouhi, B., Momeni, N., Yau, S.-T., & Nowak, M. A. (2017). Evolutionary dynamics on any population structure. Nature, 544(7649), 227–230. https://doi.org/10.1038/nature21723


Managing my publication page on WordPress with Papercite

This is a very exciting find: a way to automatically generate a publications page on a WordPress blog from a bibtex file.

I’ve used JabRef to manage my own publication record for years now. Papercite pulls the most recent version of the JabRef database (a bibtex file) via a Dropbox link and automatically generates my publication page (see it in action here). Here’s the script in the WordPress page that does the work:

{bibtex  highlight=”Michael Quayle|M. Quayle|Mike Quayle|Quayle|Quayle M.” template=av-bibtex format = APA  show_links=1  process_titles=1 group=year group_order=desc file=https://www.dropbox.com/s/2ol9lo2rh52bo6c/1.MQPublications.bib?dl=1}

(Note: I’ve replaced the square brackets with curly braces so that the publications page doesn’t render in this post about the publications page; the curly brackets above need to be square brackets in order for the script to run.)

Now, when I update my bibtex record with new publications (which I would be doing anyway) my publications page automatically shows the most recent updates.

Fingers crossed that this continues to work when Dropbox changes its web-rendering policy in September…