Acknowledgements

Chapter 1. Introduction: Qu'y puis-je ?

Chapter 2. Research context: Locating this study in the existing literature

Chapter 3. Methodology

Chapter 4. Learning from our failures: Lessons from FairCoop

Chapter 5. Different ways of being and relating: The Deep Adaptation Forum

Chapter 6. Towards new mistakes

Chapter 7. Conclusion

______________

Annex 3.1 Participant Information Sheets

Annex 3.2 FairCoop Research Process

Annex 3.3 Using the Wenger-Trayner Evaluation Framework in DAF

Annex 4.1 A brief timeline of FairCoop

Annex 5.1 DAF Effect Data Indicators

Annex 5.2 DAF Value-Creation Stories

Annex 5.3 Case Study: The DAF Diversity and Decolonising Circle

Annex 5.4 Participants’ aspirations in DAF social learning spaces

Annex 5.5 Case Study: The DAF Research Team

Annex 5.6 RT Research Stream: Framing And Reframing Our Aspirations And Uncertainties

References

This annex provides a detailed account of the FairCoop (FC) research process, which unfolded between June 2020 and October 2021.

1 First AR cycles: Deciding on a methodology

As I mention in Chapter 3, my original intention was to invite participants in FC projects to form a participatory research group, which would explore the kinds of social learning taking place within FC.

Following a few preliminary conversations, it appeared that this invitation found little echo among the people I approached. The main reason they gave was that the community was “dormant” or even “a failed experiment.” I also learned that it was (or had been) deeply divided as a result of intractable conflict.

Therefore, I shifted my approach to a diagnostic/evaluation stance. Out of these first conversations, four Research Questions (RQ) emerged which seemed to speak to the interests of the participants I interviewed while corresponding with the overall intention underlying my PhD research:

  1. What has been the trajectory of FC over time?
  2. How can FC’s current level of activity and usefulness be explained?
  3. What were the main outcomes of participants’ involvement with FC, in terms of cognitive, relational, and experiential or affective dimensions of social learning?
  4. What can social change-makers learn from this project? In particular, what can be learned about the...
      ◦ “Ways of doing” that are most closely related to FC’s impact in the world?
      ◦ “Ways of being” and worldviews, and how these have played into the life of FC?

I decided that RQ #1 and #3, being more straightforward, could be studied by conducting semi-structured interviews followed by a thematic analysis.

RQ #2 and #4, however, called for a more rigorous evaluation methodology, which would give equal weight to a variety of perspectives, and encourage participants to learn from one another.

The Fourth Generation Evaluation approach (Guba and Lincoln, 1989) seemed to fit the constructivist, participatory perspective in which I wished to undertake this evaluation. The key dynamic in this approach is the negotiation between stakeholders, and it has six main characteristics (p.8-9):

  • it views evaluation outcomes as a description of how individuals or groups make sense of their situation – not of “how things really are”;
  • it recognises a plurality of values shaping the constructions through which people make sense of the situations in which they find themselves;
  • it acknowledges that people’s constructions are linked to the social, political and cultural context in which they have been formed and to which they refer;
  • it recognises that this form of evaluation can empower, or disempower, particular stakeholder groups in a variety of ways;
  • it suggests that evaluation must have an action orientation, in order to ensure follow-up and avoid the non-use of evaluation outcomes;
  • it insists on full participatory involvement, in which participants are equal partners in every aspect of the evaluation process.

The authors argue that, by deploying the Fourth Generation Evaluation methodology (p.184-227), stakeholders are mutually educated by the evaluation process, as “each group is required to confront and take account of the inputs from other groups” and “deal with points of difference or conflict.” In this process, “a great deal of learning takes place” (ibid, 56). Given the focus of my overall research on social learning, I decided to invite the study participants to engage in such an evaluation process.

However, it rapidly emerged that most interviewees were unwilling to engage in this evaluation except through individual, private conversations with me, largely due to lack of time and to the impact of deep-seated conflict. I decided that the conditions for a productive Hermeneutic Dialectic process (p.149-155) were not met – mainly because of the lack of “a willingness on the part of all the parties to make the commitments of time and energy that may be required in the process” (p.150). Moreover, I found it difficult to generate discussions within different FC stakeholder groups, and I doubted the feasibility of establishing and mediating a forum of stakeholder representatives in which negotiation over the content of the evaluation could take place (p.73-74).

I thus decided to follow a different methodology, and to base the evaluation process needed to answer Research Questions #2 and #4 on the use of Convergent Interviewing. Nonetheless, as I will describe below, I did make use of certain techniques that are part of Fourth Generation Evaluation, and I tried to remain faithful to the overall philosophy outlined above.

2 Convergent Interviewing (CI)

CI has similarities with several aspects of the Hermeneutic Dialectic Process that is at the heart of Fourth Generation Evaluation. It is a flexible data collection process, based on an in-depth interview procedure characterised by a structured process and initially-unstructured content (Dick, 2017). CI is also emergent and data-driven, has a cyclic nature, makes use of a dialectic process, and can be used effectively in community change programs as part of a diagnosis or evaluation project (Dick, 2002, 2014, 2017).

Overall, the CI process can be outlined as follows (Driedger et al., 2006; Jepsen and Rodwell, 2008; Dick, 2017):

  1. Planning one’s approach with the host organisation or community, and negotiating how the research can be carried out so that it is valuable to interviewees and researcher alike.
  2. Preparing a maximum-diversity sample, which can be augmented by a modified snowball sample (in which interviewees are asked to nominate who else may be interviewed).
  3. Carrying out initially open-ended interviews, from which an evaluation develops gradually and inductively.
  4. By comparing interview results, developing probe questions to deepen one’s understanding of the emerging theory. These probe questions focus on the overlap between present data and either past data or emergent theory; they lead the researcher to seek out exceptions for agreements between present data and past data or emergent theory, and to seek explanations for disagreements. “The disagreements drive the theory development or diagnosis or evaluation” (Dick, 2017).

CI is based on a constant comparative reflexive process (Driedger et al., 2006). The cyclic nature of this process “allows the refinement of both questions and answers, and even the method, over a series of interviews or successive approximations” (Riege and Nair, 2004). This builds rigour in the continuous refinement of the research content and process, while providing flexibility, which is useful in an Action Research context.
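For readers who think procedurally, the cyclic comparison at the heart of CI can be sketched in Python. This is purely an illustrative abstraction, not software I used: all names are hypothetical, and `interview` and `compare` stand in for the human work of interviewing and analysis.

```python
def convergent_interviewing(interviewees, interview, compare):
    """Sketch of the CI cycle: each new interview is compared with the
    accumulated corpus; agreements are probed for exceptions and
    disagreements for explanations, driving theory development."""
    corpus, probes = [], []
    for person in interviewees:
        # Open-ended at first; later interviews are increasingly probe-driven.
        data = interview(person, probes)
        agreements, disagreements = compare(data, corpus)
        probes += [f"Are there exceptions to: {a}?" for a in agreements]
        probes += [f"How do you explain disagreement on: {d}?" for d in disagreements]
        corpus.append(data)
    return corpus, probes
```

In practice, of course, this loop was carried out manually through note-taking and comparison, not in code; the sketch only makes explicit how disagreements feed the next round of probe questions.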

3 Interview process – June 2020 to January 2021

In the early stages of my interviewing process, I invited several of my FC contacts to form a group of co-researchers with me. However, it appeared that this was not possible for any of them at the time. Therefore, I pursued the research process on my own.

Dick (2017) recommends that a CI process be carried out by a pair of interviewers, to enrich the analytical process. However, he also states that a single interviewer may fruitfully use CI without losing too much rigour, as long as they take care to “compare the second interview to the first one, the third interview to the first two, and so on. When a theory (etc.) begins to emerge, each successive interview is then compared to the emergent theory” (ibid, 20). I made sure to follow this guideline.

In order to form a sample, I asked my initial contacts to recommend other participants in the FC network who had different backgrounds and points of view from theirs, while still being representative of the network. I then contacted the persons they recommended. When they responded positively to my request (which was the case about 50% of the time), I had an open-ended discussion with them, and then asked them to recommend another person I could speak to, using the same criteria as before. In this way, a modified snowball sample of 15 participants emerged.

Interviewees were all persons who were deeply involved in FC as a project, with between 1.5 and over 4 years of participation (average: 3 years) at the time of their first interview. They included persons identifying with either (or neither) of the two broad, opposing factions that seem to have formed within FC, and hailed from 8 different countries. I deemed this sample diverse enough for the purposes of understanding the history and dynamics at play within FC.

Before interviewing anyone, I asked prospective interviewees to consent to taking part in this research by sending me an electronic message containing the copied-and-pasted paragraph titled “Consent email” from the online information sheet. In practice, I usually sent them an instant message containing this text and asked them to reply to it saying “I agree.” For those with whom I communicated in Spanish, I translated this paragraph into Spanish.

All my initial communications with FC participants happened over the instant messaging software Telegram. When I got in touch with a new contact, I started by asking them what would be the most comfortable way for us to carry out our discussions. Six interviewees agreed to be interviewed over video-conference calls, and nine via private Telegram text and recorded voice messages. I also asked them in what language they would prefer to communicate: eleven discussions took place in English, and four in Spanish.

In the case of video calls, which took place over Zoom or Jitsi, I took extensive notes during the interview. In the case of Telegram voice messages, I transcribed the messages I received. To facilitate the analytical process, I translated the Spanish notes and text messages into English, and analysed all text using the thematic analysis software Quirkos.

It should be noted that while interviews over video calls were clearly bounded in time, taking about one to two hours, interviews carried out over instant text or voice messages were more continuous, as they allowed the interviewee to respond whenever they had time. This enabled several “interviews” to take place simultaneously, which made the process more dynamic – I was, in effect, able to rapidly test for agreement or disagreement whenever someone raised a new issue. On the other hand, it made it slightly more onerous to keep track of which questions I had asked to whom.

I made sure to regularly summarise my understanding of what interviewees were sharing with me, in order to confirm I understood them well.

In order to explore the two closely related research questions (RQ #2 and #4) that called for the use of CI, I began by asking the interviewees to tell me more about their experience of FC. In particular, I asked them what they considered had been the main challenges that FC had faced or was facing as a project, as a “general probe” question (Dick, 2017, p.7).

By comparing my notes from each interview to my corpus of previous interviews, I then gradually began building an emergent theory about the general categories of challenges that appeared to have been present in FC (ibid, p.13). In effect, I carried out an inductive thematic analysis, building on the method of Template Analysis (TA), as presented by King (2004, 2012) and Brooks and colleagues (2015). I found TA appropriate for its flexibility, and its usefulness to analyse large volumes of diverse data in a time-effective way (Brooks et al., 2015). Besides, as a “codebook” thematic analysis (Braun et al., 2019), it allows the initial development of themes early on in the analytical process, while enabling the iterative refinement of this framework through inductive engagement with the data in the process. As such, it felt well-suited to being combined with CI.

TA encourages the development of an initial template, made of a priori themes that are based on a sub-set of the data. In this case, this sub-set was the corpus of interviews I was building. The analysis then "progresses... through an iterative process of applying, modifying and re-applying the initial template" (King, 2012, p.430). I define themes as "the recurrent and distinctive features of participants' accounts... that characterize perceptions and/or experiences, seen by the researcher as relevant to the research question of a particular study" (ibid, p.430-1). In TA, themes are hierarchically organized (groups of similar codes are clustered together to produce more general higher-order codes), and "the extent to which main (i.e. top level) themes are elaborated - in terms of the number and levels of sub-themes - should reflect how rich they prove to be in terms of offering insights into the topic area of a particular study" (ibid, p.431).

Through this process, by updating the emerging template iteratively after each interview, I gradually came to build the following template of themes, corresponding to organisational issues experienced in FC by interviewees:

  • Objectives and strategy
      ◦ FC’s twin strategic goals
      ◦ FairCoin
  • Ways of doing
      ◦ Governance
      ◦ Tools
      ◦ Membership
  • Ways of being
      ◦ Mutual care, civility and trust
      ◦ Conflict and factions
      ◦ Cultural and linguistic issues

For each sub-theme, I devised probe questions testing for agreement on the relevance of each issue identified. In a spreadsheet, I kept track of the agreements, disagreements, or “no opinion” voiced by interviewees for each issue.

Dick (2017, p.13) argues that “idiosyncratic information from a single participant” – i.e. data not overlapping with the existing data set or emergent theory – “can generally be ignored,” thus increasing the efficiency of the process. However, knowing from Riege and Nair (2004, p.78) that “less important issues discarded in the earlier interviews often [emerge] again in the later interviews,” I did not ignore any issue, and instead asked probe questions about every single issue to at least five interviewees. In the Case Report (see next section), I included issues that appeared significant to at least three interviewees, and attempted to systematically point out the degree of agreement or disagreement for each issue.
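The spreadsheet tally described above can be sketched as follows. The issues and stances here are hypothetical examples, and the thresholds mirror the ones stated above: each issue probed with at least five interviewees, and issues significant to at least three included in the Case Report.

```python
from collections import Counter

# Hypothetical probe responses: issue -> stances voiced by interviewees.
responses = {
    "Governance": ["agree", "agree", "disagree", "agree", "no opinion"],
    "FairCoin": ["agree", "disagree", "disagree", "agree", "agree"],
    "Membership": ["no opinion", "agree", "no opinion", "agree", "no opinion"],
}

MIN_PROBES = 5        # every issue was probed with at least five interviewees
MIN_SIGNIFICANT = 3   # issues significant to >= 3 interviewees entered the Case Report

report = {}
for issue, stances in responses.items():
    assert len(stances) >= MIN_PROBES
    tally = Counter(stances)
    report[issue] = {
        "tally": dict(tally),            # degree of agreement, kept alongside each issue
        "include": tally["agree"] >= MIN_SIGNIFICANT,
    }
```

Keeping the full tally next to each inclusion decision mirrors the report’s practice of noting the degree of agreement or disagreement for every issue, rather than reporting only the issues themselves.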

Answers to the research questions not directly connected to the evaluation process (i.e. RQ #1 and #3) were also explored through the convergent interviewing process, although more lightly. This was due to the broad agreement on the history and trajectory of FC as a project (RQ #1), and to the similarity of replies to the questions about the positive and negative outcomes that interviewees associated with their participation in FC (RQ #3). Nonetheless, particularly concerning the latter, interesting answers from one interviewee helped me develop probe questions to ask others.

In the case of RQ #1, I triangulated the information I received from interviewees with several media reports on FC.

4 Sharing the first draft of the Case Report

In order to test my understanding of what the interviewees had shared with me, and invite constructive feedback and criticism, I decided to summarise my findings into a draft Case Report. This is recommended by Guba and Lincoln (1989, p.74) in order to “communicate to each stakeholder group any consensus on constructions and any resolutions regarding the claims, concerns, and issues that they have raised.”

In writing the report, I tried to meet the quality criteria the authors formulated elsewhere (Lincoln and Guba, 1988; Guba and Lincoln, 1989, p. 224):

  • Axiomatic criteria: making sure the report resonates with the axiomatic assumptions underlying its guiding paradigm. In this case, I attempted to reflect multiple realities, following the constructivist paradigm in which I carried out this study.
  • Rhetorical criteria: paying attention to form and structure, including unity, overall organization, clarity, and craftsmanship.
  • Action criteria: working to ensure the study’s ability to evoke and facilitate action on the part of the reader. Among these criteria, fairness, educativeness, and actionability or empowerment are prominent.
  • Application or transferability criteria: enabling the reader’s drawing of inferences which may apply in their own context or situation. The presence of thick description and the provision of vicarious experience are important in this regard.

On January 31, 2021, I shared the first draft report with all interviewees, titled: “FairCoop: What can we learn from this experiment?” I wrote it in English, translated it into Spanish, and then published both versions using the online platform OnlyOffice. These reports could only be read by those who received the secret URLs leading to them.

The report was structured as follows:

  • Welcome message (1 page). This message gave context about the report, and stressed that it did not attempt to present “The Truth” about FC, but instead focused on giving voice to multiple perspectives. It also contained an invitation to leave anonymous comments directly on the document, in a spirit of amicable discussion – or to send me feedback via Telegram. Finally, it asked readers not to share the document with anyone.
  • Introduction (1 page). This section briefly presented the rationale for the study, the research questions, some information about the interviewees, and the methodology.
  • “A brief timeline of FairCoop” (1.5 page). In this section, I attempted to answer RQ #1, on the basis of interviewee testimonials, media reports, and scholarly articles.
  • “Summary: Some key findings” (4 pages in English, 5 in Spanish). This was an executive summary of the results of the evaluation process, answering RQ #2, #3 and #4. To answer RQ #2, for each broad category of issues identified in the CI process, I outlined the key topics on which interviewees had expressed opinions during my interviews. To answer RQ #4, for each section I suggested “major learnings” that one might consider worth reflecting on, which I wrote on the basis of my examination of the scholarly literature on relevant topics (see Chapter 2). To answer RQ #3, I summarised the main recurring positive and negative outcomes that interviewees mentioned as regards their participation in FC. Finally, the summary invited the reader to compare the report with the contents of a participative online document, co-created by several FC participants a year earlier without my involvement, which explored similar questions.
  • “What went wrong?” (24 pages in English, 26 in Spanish). This section developed the findings of the evaluation process (RQ #2). For each topic identified as an issue by at least three interviewees, I attempted to present a thick description of the issue, quoting directly from interviewees as much as possible. I also indicated the level of agreement that existed for each issue, and called special attention to those on which perspectives diverged.
  • “Negative outcomes from FC” (2 pages). In this section, I further elaborated on the main negative outcomes that interviewees voiced with regards to their participation in FC.
  • “Positive outcomes from FC” (5 pages). Similarly to the previous section, I presented the key positive outcomes mentioned by interviewees, with a special emphasis on what they said they learned due to their participation.

As the report was quite lengthy – 40 pages in English and 43 pages in Spanish – I invited interviewees to read at least the “key takeaways” appearing on the first 8 pages (in English) or 9 pages (in Spanish). I mentioned that I would give everyone a month to read and comment on the document, in whatever language they felt most comfortable with.

I also explained to them that in order to overcome the language barrier, I would translate all comments posted from English to Spanish or vice-versa, and post the translated comment at the appropriate location on the other version of the document.

5 Feedback on the draft report

On March 3, I asked all 15 interviewees whether they had read any version of the report, and invited them to share with me privately their general impressions and level of agreement or disagreement with what they read.

I will present here a synthesis of the feedback I received on the topic of the first draft of this report. For an extended version, including quotes of messages sent to me by interviewees, please refer to the second draft of the report (discussed below).

Out of 15 interviewees...

  • 11 said they read the report, of which...
      ◦ 3 said they hadn’t had the time to read it completely;
      ◦ 5 said they had made comments (or sent me comments about specific parts of the report privately);
      ◦ 4 said they didn’t make comments;
      ◦ 10 said they fully or mostly agreed with the contents of the report;
      ◦ 1 said they objected to the contents for being biased.
  • 2 said they didn’t read the report;
  • 2 did not respond to the invitation to read and comment.

Several interviewees shared general feedback with me in response to my request.

One person mentioned that the report “resonate[d] quite well with [their] understanding of what happened,” but that they didn’t recall that one of the issues had been so disruptive within FC (although they remembered it had been disruptive in one of the FC subprojects).

Another remarked that many of the criticisms voiced by a certain group of people within FC, and for which this group had experienced strong pushback, seemed to be generally accepted by the interviewees at large. To them, this sense of consensus made the report very valuable.

A third person, however, voiced concerns that some interviewees had been dishonest in their testimonies, and that the report contained “a biased version of reality” as a result. They also said that they didn’t want to leave any anonymous comments on the document, as they expected these other interviewees to respond aggressively.

A fourth person voiced generally positive feelings about the report, but regretted that voices from the critical group referred to above were featured too prominently.

Five other interviewees shared positive feedback about the report, voicing praise for its clarity and usefulness, and for its value in terms of learning about FC and mistakes that were made. One person mentioned they would welcome a documentary which would be produced on the basis of the report, and another found parallels between the history of FC and that of the Spanish Civil War.

Altogether, 34 comments were posted on the two versions of the document between January 31 and March 3, 2021, when I closed the document for commenting.

Here, I will not attempt to summarise all these comments, but merely point out the main types of comments that were voiced, with special emphasis on those that shed new light on the report and on FC.

The comments notably include:

  • Historical precisions, about the FC timeline and in particular the adoption and use of the OCP tool;
  • Technical precisions, especially regarding the workings of FairCoin, the FairCoin economic ecosystem, or the way decisions are/were made in FC;
  • Disagreements about certain facts and figures mentioned by other interviewees – for example, how much money was spent on the development of OCP, or whether OCW was used to keep track of people's work. These disagreements mostly concerned the main areas of tension that led to (or fed into) conflict within FC, and therefore did not surprise me much.

A further area of disagreement expressed in comments and feedback, which I found more interesting (although the interviews had already surfaced it to some extent), has to do with the nature of the conflict and of the factions within FC: several interviewees disagreed with the report's presentation of these factions.

For example, one commenter wrote:

"Komun NEVER presented itself as a split. we were people who came from FC, who wanted to use FairCoin and work on useful tools... but we wanted, by affinity, to work without so much protocol and so many barriers as in FC."

In contrast, another person wrote:

"There was a group of people that decided to be a faction and act as a faction. But the other 'faction' wasn't so, we were just a group of autonomous people with their own opinons never acting as a 'faction' nor herd following a leader. We were called the Elite but it wasn't so, I insist. There was one group acting as a group against other people acting individually but considered by them as a faction but not being so."

These comments showed the diversity of narratives that existed around the conflict that broke out in FC. While one group of people (whom I referred to as "Faction 1" in the report) did see themselves as consciously united against what they perceived as injustice and mismanagement, the first comment above suggests that they did not necessarily view themselves as a breakaway group. As for the people who felt most in opposition to them, several tended to say they did not consider themselves a "faction", especially not one following a leader (in fact, nearly every interviewee levelled at least some measure of criticism against FC).

  • Personal reflections. For example, on the topic of speculators taking advantage of the FairCoin double rate, one person wrote:

"in retrospective this was very naïve to think that this would not happen. If there is a hole in the system somebody will be take profit of it. We were kind of 'hacked' while the weak points were obvious for a long time, thinking everything will go well."

I find this comment interesting, especially given that one of the objectives of FC (see the report section "Objectives and Strategy") was to use FairCoin to hack the global financial markets. This commenter seems to imply that eventually the opposite happened. While I would hesitate to say that the global financial markets “hacked” the community, it appears at least that they were a source of profound economic instability and ideological disagreement, which brought to light deep fracture lines and simmering discontent. And the community’s social cement (“ways of being”) seems to have been too weak to repair the damage – or to enable the building of new structures in place of the old.

6 Producing the second draft report

On March 5, 2021, I produced a second draft of the Case Report (in English and in Spanish). It contained all the comments from interviewees, as well as a new section at the end, which presented the summary of feedback received and the summary of comments that appears in the previous section of this document.

The report also showed (using the “track changes” function) 13 additional precisions and quotes from interviewees, voiced since the report was shared with them. I chose to integrate this information for the added nuance or new perspectives it brought to certain intricate issues – for example, that of conflict and factions.

The rest of the report remained identical to the first draft, apart from a few minor corrections.

7 Third draft report, and ethical dilemmas

On March 10, I got in touch with the founder of FC, Enric Duran, and shared the second draft of the report with him.

I had chosen not to approach Duran earlier: it had rapidly become apparent that his actions featured prominently – and unflatteringly – in the evaluation process, and I decided that his participation in the first interview cycles might render the process more complex.

I asked Duran whether he wanted to share feedback with me on the report or any section thereof, in writing or on a video call. However, Duran responded to none of my attempts at obtaining feedback from him, despite the two reminders I sent him (on March 24 and April 7), which were marked as “read” in Telegram.

This created an ethical quandary for me. Producing research depicting an identifiable person in a negative light, without including this person’s voice in the process, felt at odds with the participative ethos that I wished to follow in this research. Besides, I did not want this report to create harm – for example, the rather strong criticism of Enric Duran’s actions within it could cause him public shame and other unpleasant consequences. Although this is impossible to control absolutely, at the very least I wanted to minimise the likelihood of this occurring directly as a result of this document being published.

On top of this, one interviewee was quite clear in their feedback that they perceived the report as biased toward one of the conflicting parties, due to some of the opinions expressed within. As a result, I also felt some concern that publishing the report could re-ignite the conflict that shook FairCoop.

In the hope of gaining more clarity, and of co-designing a common decision and strategy that might minimise risks for everyone involved (in keeping with democratic AR principles), I decided to produce a third draft of the report, share it with all interviewees, and invite them to respond to a questionnaire.

The third draft report, shared on April 30 (in English) and May 5 (in Spanish), was largely similar to the second draft, except for:

  • A table of contents;
  • A description of my unsuccessful attempts to obtain feedback from Enric Duran;
  • An invitation to reflect on whether and how this information might be shared with others, beyond the circle of interviewees.

Simultaneously, I invited all interviewees to respond to an online questionnaire in English and Spanish. This questionnaire was sent along with the following message:

“Your opinion will be very important to figure out how this document could be shared with the outside world ethically, while hopefully helping to bring about positive change. I commit to balancing truth-telling with compassion, as well as respect for the time that you and others have already spent on this project, in taking this decision. I hope you will support me in doing so. If the answers to the questionnaire don't show a clear way forward, I will consult my university ethics committee for guidance.”

It included the following questions:

  • Do you wish to see this report published and enter the public domain?
  • If this report were published, would you feel comfortable with Enric Duran being named in it as he is now?
  • So far, Enric Duran has not shared any feedback or response to this report. Would you be comfortable with this report being published as such, without any input or response to it from ED?
  • Would you be comfortable with this report being published while still displaying the anonymous comments that were added to it by various interviewees?
  • If you would like this report to be published, what would be the best way to do so, or the best platform?
  • Apart from publishing this report, do you see any other way(s) the information it contains could be usefully shared with other networks that try to bring radical collective change?
  • Do you agree to let me (the researcher) take a decision on whether/how to publish this report, based on the results of this questionnaire? (If you chose "No, I want this decision to be taken differently", please tell me more)
  • Would you be willing to take part in a follow-up process to decide what to do with this report? (If so, please write your name below)

Nine research participants responded to the questionnaire. All of them expressed their wish to see the report published as it stood, and several also suggested additional formats in which the information it contained could be shared (e.g. a video documentary). One respondent expressed the wish for a response from Enric Duran to be included before the report was published, and another mentioned that they considered some opinions voiced in the report “inaccurate and unfair to some.” All respondents agreed to let me take the final decision on whether and how to publish the report, although one of them suggested that if any other participant had strong objections to some of the content (e.g. some of the anonymous quotes), that content should be deleted first.

Three other participants responded to me outside the questionnaire, bringing the total number of respondents to 12 (out of 15). Two of them agreed to the publication of the report, but suggested it be accompanied by short videos presenting the main findings. The third objected to the publication, considering the report “biased”; this was the same person who had voiced critical feedback upon reading the first draft.

Based on the ethical principles of my research, the following approach was taken towards anonymity:

  • Given that I had only offered research participants anonymity for their own input (see the Project Information Sheet in Annex 3.1), and had not indicated that I would also anonymise FC and its founder, I was entitled to name the organisation and its founder in this thesis.
  • However, out of respect for some of the concerns expressed regarding the content of the draft report, I could share a final report with all interviewees in which both the network and its founder would be anonymised.

8 Sharing the final report

On October 1, 2021, I shared the final version of the Case Report with all interviewees, as two PDF files, in English38 and Spanish39. I titled this report “Organising online to make a difference. Practical learnings from an online community dedicated to creating radical collective change.” I indicated on the files that I was publishing them under a Creative Commons Attribution-NonCommercial 4.0 International License.

I made sure this report was completely anonymised. In the updated introduction, I presented the coding scheme I used, in which a letter stood for each identifying feature – including the name of the community, the name of its founder, the name of the cryptocurrency developed in FC, etc.

This new introduction also contained more details about my stance as a researcher, stressing that the report did not claim to contain “the whole truth,” and noting the absence of any conflict of interest on my part. I also provided more details on my methodology, and on how I had tried to manage my own biases.

I also added an indicative bibliography at the end of the report. It listed some important examples of literature I had found useful in reflecting on the learnings I considered relevant from this research (in response to RQ #4); these learnings appeared in the same executive summary section as in the previous versions of the report.

While sharing the report, I sent the following message to the research participants:

“I will not be publishing this report on any public platform, but will only share it with you and the other research participants. I leave it to your own judgment whether to share it with others or not.

My hope is that this report can be useful to you and others in order to create social change in the world. In case you hear of anyone (be it a person or a group) drawing any lessons or inspiration from this work, please let me know.”

Since then, two interviewees have mentioned sharing this report, still anonymised, on the online platform of another socially innovative project, where it generated useful discussions. One FC participant with whom I had not previously been in touch later contacted me to say they had read the report and found value in it.