The Commons of Science—Why It Takes a Village: Christine Borgman on Collaboration, Curation, and the Invisible Infrastructure of Knowledge
Shalini Urs
Science as Infrastructure: National Investment, Public Good, Global Commons
Science is a vital national resource that underpins innovation, economic growth, public health, national security, and societal well-being. Governments, intergovernmental organizations, and supranational entities—such as UNESCO, the European Union, and the G20—increasingly recognize scientific research as a core public responsibility, central to both economic development and human advancement.
In his seminal 1942 essay on the normative structure of science (later collected in The Sociology of Science, 1973), Robert K. Merton articulated the foundational norms of modern science, later encapsulated in the acronym CUDOS: Communism, Universalism, Disinterestedness, and Organized Skepticism. The principle of communism (in the scientific context) highlights the collective ownership of knowledge, asserting that scientific findings should be shared openly to foster collaboration. Secrecy, Merton argued, runs counter to the ethos of science.
This ethos underlies the International Science Council’s 2021 position paper, Science as a Global Public Good (Boulton, 2021), which affirms that science must remain a freely accessible, evidence-based, and rigorously scrutinized enterprise. These principles define science not only as a national asset but also as a global public good, essential to humanity’s collective future. In recent years, the momentum toward Open Science has grown significantly across disciplines and among stakeholders.
Science, when viewed as public infrastructure, encompasses the shared resources, systems, and knowledge that support scientific research and its applications for the benefit of society. This includes physical infrastructure like research facilities and digital platforms for data sharing, as well as the broader ecosystem of open science practices and policies that promote collaboration and accessibility.
As Pampel and Dallmeier-Tiessen (2014) noted, Open Science is a movement that seeks to make scientific research transparent, accessible, collaborative, and inclusive. UNESCO (2021) defines open science as an umbrella framework that unites multiple initiatives—open access, open data, open source, citizen science, and more—aiming to make scientific knowledge multilingual, openly available, and reusable. Its goal is to foster global scientific collaboration, accelerate innovation, and democratize access to knowledge by involving a broader range of societal actors beyond the traditional scientific community. Recognizing open science as a framework for equitable, secure, and collaborative research, U.S. federal agencies designated 2023 as the Year of Open Science to promote inclusive access, cultural respect, and reproducibility through coordinated national initiatives.
In the context of Open Science, research data has emerged as the lubricant that accelerates science. As we enter the era of the Fourth Paradigm of Science—characterized by data-intensive discovery—data has become both the substrate and the driver of scientific progress. Ensuring its integrity, accessibility, and reuse is paramount. Research Data Management (RDM) is the emerging discipline focused on organizing, preserving, and sharing research data throughout the research lifecycle. By making datasets discoverable, accessible, and reusable by peers and future researchers, RDM strengthens the transparency, reproducibility, and cumulative nature of scientific knowledge. Most research organizations now regard RDM as a foundational practice aligned with open science goals.
Christine Borgman has profoundly shaped our understanding of data, information infrastructure, and scholarly communication through her award-winning books—From Gutenberg to Global Information Infrastructure (2000), Scholarship in the Digital Age (2007), and Big Data, Little Data, No Data (2015)—alongside numerous research articles, providing insightful explorations of scholarly data culture and its challenges.
In this episode of InfoFire, I speak with Dr. Christine Borgman, Distinguished Research Professor at UCLA and a towering figure in the field of information science. With a career spanning digital libraries, data science, and scholarly communication, her work has profoundly shaped how we conceptualize data stewardship, infrastructure, and the social life of information. In this wide-ranging conversation, Professor Borgman reflects on the evolving nature of research data, the pressing challenges of reuse, and the institutional infrastructures—or lack thereof—that shape the flow of knowledge in the digital age.
While policy frameworks and public investment shape the macro-infrastructure of science, it is the less visible, day-to-day work—performed by researchers, librarians, data curators, and administrators—that sustains its operation. Scientific knowledge does not circulate in a vacuum; it depends on complex, collaborative ecosystems of people, technologies, institutions, and norms. Understanding science as a shared endeavor—what Christine Borgman calls an “invisible infrastructure”—is essential to grasping how research functions in practice. In short, it takes a village.
Why It Takes a Village
Despite supportive policies from governments, funding agencies, and journals, the adoption of research data sharing remains largely concentrated in fields such as astronomy, genomics, survey research, and archaeology—disciplines where sociotechnical infrastructures are more mature. Increasingly, data sharing is viewed not just as a compliance requirement but as a moral imperative, particularly in areas like biomedical science. Stingelin-Giles (2016) outlines key ethical principles for a data-sharing framework, asserting that the moral default is to share research data to ensure a just distribution of benefits. However, conditions must be applied to safeguard related rights and interests, such as privacy and intellectual property.
While the benefits of shared data are widely acknowledged, critical ethical questions persist: Who benefits? Who is excluded? What are the implications for privacy, social justice, public trust, and the governance and stewardship models necessary to ensure responsible and equitable data sharing? (Kalkman et al., 2019).
Pasquetto et al. (2017), in collaboration with Christine Borgman, explore the factors shaping data practices—such as domain expertise, scale (including data volume, disciplinary breadth, distribution, and duration), and the degree of centralization in data collection—to propose new models of scientific practice that better support data sharing and reuse. The DataWorks! Prize, an initiative by FASEB and the NIH, highlights the transformative impact of data sharing and reuse on scientific discovery and human health. Through its annual challenge, it recognizes and rewards teams whose research exemplifies the power of effective data reuse and research data management.
Open Science by Design: Realizing a Vision for 21st Century Research (National Academies of Sciences, Engineering, and Medicine, 2018) advocates for open science as the default model across the research enterprise. The report offers guidance on addressing barriers and engaging stakeholders, promoting a community-driven approach that emphasizes collaboration among researchers, institutions, and funders to build shared infrastructure, establish standardized practices, and create incentives that enhance transparency, reproducibility, and accessibility in research.
In her aptly titled 2022 article, Why It Takes a Village to Manage and Share Data (co-authored with Phil Bourne), Borgman argues that data stewardship is a collective enterprise requiring robust infrastructures, shared governance, and interdisciplinary collaboration. “Everyone’s passing the buck,” she observes wryly. Principal Investigators (PIs) remain the de facto stewards of research data, but they are seldom equipped or incentivized to ensure proper curation and reuse.
The paper advocates a shift from individual responsibility to institutional commitment. Universities that invest in infrastructure—such as data curators, software engineers, and archivists—will gain a competitive edge in attracting scholars and research funding. “It’s like libraries in the past,” she says. “Those with better infrastructure will be the research powerhouses of the future.”
At the heart of this village is data—not just collected, but curated, interpreted, and reused across contexts and communities. Yet data does not speak for itself. Its value emerges through layers of stewardship, metadata, infrastructure, and trust. While policies increasingly mandate data sharing, the real transformation lies in enabling meaningful reuse—the ability of others to find, understand, and apply data beyond its original context. This shift from merely sharing data to supporting its sustained reuse marks a critical frontier in research practice.
From Data Shared to Data Reused: Shifting the Focus
Research Data Management (RDM) streamlines the collection and organization of data from publicly funded research, ensuring accessibility and long-term usability. By transforming unstructured data from journal articles into structured, standards-based databases, RDM enables researchers to mine across multiple datasets, uncover patterns, and generate new insights. It emphasizes both data sharing and data reuse as integral, interrelated practices within the scientific community.
Data sharing refers to making research data publicly available, primarily to support transparency, scientific integrity, and reproducibility. Data sharing enables researchers to extract added value, avoid redundant efforts, explore new questions, and collectively advance scientific knowledge (Borgman, 2012; Lord et al., 2004). In contrast, data reuse involves employing existing datasets to address new research questions—a practice increasingly common across scientific disciplines. While data sharing ensures availability, data reuse brings that data to life in new investigative contexts. Pasquetto et al. (2017) emphasize the need to better understand data reuse—a concept complicated by contested definitions of “data,” “sharing,” and “open.” They propose working definitions and key research questions to advance the study of scientific data dissemination and reuse.
The reuse of research data is now recognized as a core principle of effective data management. It offers tangible benefits: cost savings, reduced duplication of effort, and the ability to validate or replicate prior findings. As a critical component of the research data life cycle, data reuse has become central to institutional and disciplinary data management plans. However, it remains a complex and evolving concept. van de Sandt et al. (2019) analyze its etymology, review definitions across the literature, and distill key characteristics, validating them against real-world scenarios to propose a clear and measurable definition of data reuse distinct from related activities.
In the context of reproducibility, data reuse entails more than simply accessing existing datasets. It requires robust metadata, standardized formats, persistent identifiers, and clear licensing—all of which foster trust and transparency. High levels of reuse are a hallmark of collaborative, well-integrated research communities and signal the maturity of knowledge infrastructures in a given field.
As recognition of reuse grows, new metrics of scientific impact are emerging—most notably, indicators that track the number of datasets reused in published research. These metrics reflect the increasing value placed on data as a shared, reusable scholarly asset.
Borgman’s recent paper, From Data Creator to Data Reuser: Distance Matters (with Paul Groth), pushes the conversation on open data beyond mere sharing. “Data sharing is necessary but not sufficient for data reuse,” she insists. The paper introduces a conceptual framework for understanding the distance—technical, social, cognitive, and institutional—between data creators and data reusers. These distances, she argues, often determine whether data can meaningfully be reused at all.
While open science policies have increasingly mandated data sharing, Borgman warns against oversimplifying it as a transactional act. “Sharing data is about knowledge exchange,” she emphasizes, “not just putting things in a box and handing them over.” To foster genuine reuse, she calls for institutions and funders to support curated, domain-specific, and purpose-driven sharing strategies, rather than relying on generic mandates and minimal compliance.
Astronomy, Archives, and the Art of Migration
Astronomy has long exemplified the value of research data sharing and reuse. Copernicus’s heliocentric model, which placed the Sun at the center of the solar system, was based on a reinterpretation of existing astronomical observations. Tycho Brahe (1546–1601) revolutionized observational astronomy by meticulously collecting a lifetime’s worth of precise data. Although Brahe himself favored a geo-heliocentric model and was reluctant to share his data—fearing it might support Copernican theory—his records ultimately passed to Johannes Kepler after his death. Kepler, drawing on Brahe’s data and applying mathematical modeling, discovered that planets move in elliptical orbits, a result he first established for Mars. By merging geometry with physics, Kepler transformed cosmology and demonstrated how the reuse of observational data across generations can lead to foundational scientific advances.
Humanity’s fascination with the cosmos has driven the observation and preservation of astronomical data for millennia—from the ancient Venus Tablet of Ammisaduqa (first millennium BCE) to the massive digital repositories of today, such as the Sloan Digital Sky Survey (SDSS). Astronomy has always been a data-intensive science, and in the modern era, data archiving has become a central pillar of research. Institutions like NASA and the European Space Agency (ESA) maintain extensive archives of data from current and past missions, preserving these for reuse by future generations of scientists.
However, the vast volumes of data generated by modern instruments pose profound challenges. Effective data archiving requires careful planning, robust infrastructure, and adherence to best practices, and long-term stewardship demands not only technical solutions but also institutional coordination and clear policy frameworks. The 2023 Workshop Report on the Future of Astronomical Data Infrastructure highlights these challenges, pointing to access barriers, data duplication, infrastructural gaps, and risks to data integrity. The report underscores the urgent need for coordinated strategies to ensure that astronomical data systems remain robust, interoperable, and sustainable.
Importantly, Borgman et al. (2016) emphasize that the durability of knowledge infrastructures depends on the often-invisible work of information professionals—such as cataloging, metadata creation, and data curation—which requires continuous investment in both human expertise and technical systems. These invisible labors are foundational to infrastructure sustainability but are frequently undervalued or overlooked.
Furthering this line of inquiry, Borgman and Wofford (2021) examine how astronomers transform raw telescope observations into curated, reusable scientific data products. Their research reveals that software pipelines—while essential to this process—are fragile, often bespoke, and underrecognized components of astronomical knowledge infrastructures. They stress the importance of sustaining both public and private pathways for data reuse and offer concrete recommendations for maintaining long-term access to curated data products in astronomy.
Christine Borgman’s deep ethnographic engagement with astronomers—arguably one of the most data-sophisticated scientific communities—reveals the immense complexity of curating data at scale. Her studies of the SDSS and recent work with the Space Telescope Science Institute spotlight the critical tensions that arise in archiving, migrating, and sustaining large-scale scientific datasets.
“Astronomy has the richest infrastructure of any field I’ve studied,” Borgman observes. Yet even here, distinctions between space-based and ground-based research, varied funding streams, and shifting institutional commitments make long-term stewardship a formidable task. In one notable example, she describes a team that spent two years evaluating which parts of 160 terabytes of data were worth migrating—a process she likens to “a collection development course no MLIS program offers.”
The larger issue, she cautions, is not just preserving the bits, but maintaining the infrastructure around the data. Archives are not static repositories—they are living systems shaped by contracts, institutional relationships, technical interfaces, and curatorial judgment. As funding pressures grow and digital preservation costs rise, the risk of losing these surrounding infrastructures is as serious as the potential loss of the data itself.
In sum, astronomy’s long history of data-intensive research illustrates both the promise and the precarity of scientific data infrastructures. Without deliberate attention to governance, sustainability, and the often-unseen work of stewardship, the risk is not just data loss—but the erosion of the very systems that make scientific discovery possible.
From Data Abundance to Strategic Insight: The Case for a Chief Data Officer
Despite the hype surrounding “Big Data” in major publications like Science, Nature, The Economist, and The New York Times, Christine Borgman (2015) argues that having the right data is often more valuable than simply having more data—and that small, well-curated datasets can be just as impactful as large ones. She highlights that data sharing remains difficult due to minimal incentives and the diversity of disciplinary practices, emphasizing that data acquire value only within a knowledge infrastructure—an interconnected system of people, practices, technologies, institutions, and relationships.
Amid both the overabundance and, in some cases, scarcity of research data across universities, institutions are increasingly recognizing the need for a Chief Data Officer (CDO) to take strategic responsibility for enterprise-wide data governance. In less than a decade, the CDO role has gained considerable traction in higher education, reflecting a growing commitment to managing data as a strategic asset. The COVID-19 pandemic underscored this need, as institutional leaders had to model health, personnel, and financial scenarios in real time. While no organization escaped the ripple effects, those lacking data-driven insight faced heightened risk and scrutiny. Within the unique ecosystem of higher education, CDOs help align data practices with institutional missions and stakeholder needs.
In a 2020 podcast, San Cannon, Associate Vice Provost for Data Governance and Chief Data Officer at the University of Rochester, discussed the critical responsibilities of a CDO in higher education. These include leveraging technology for effective data management, fostering stakeholder collaboration, and building resources to support robust data governance. Yet, as Webber and Zheng (2020) observe, universities still lag behind industry, business, and government in realizing the strategic value of their data assets.
Expanding this discussion, a 2022 Science article co-authored by Borgman and Amy Brand extends the critique to the administrative data landscape in universities. Too often, institutions are data-rich but insight-poor—or worse, intentionally data-blind. Borgman calls for a new kind of leadership: a Chief Research Data Officer or “data czar” with cross-institutional authority, capable of bridging the silos that persist between libraries, IT, and research administration.
Yet she offers a note of caution: “It’s not enough to create a position. You need the right person, with the right scope, and the authority to convene stakeholders.” A well-designed role can unify institutional efforts and support effective governance. Poorly defined, it risks reinforcing the very silos it aims to dissolve.
Data Friction: Navigating Platforms, Policies, and Practices
Despite the growing recognition of the value of data sharing and reuse, significant “data friction” remains—friction not only in the technical sense, but in the social, cultural, and institutional dynamics that shape data practices.
Edwards et al. (2011), in collaboration with Christine Borgman, conceptualize this as “science friction”—the challenges that emerge in interdisciplinary, data-driven collaboration. As research increasingly depends on interoperable data, tools, and services, metadata—commonly defined as “data about data”—becomes a central concern. Yet rather than static, complete artifacts, metadata are often ad hoc, evolving, and incomplete. Drawing on ethnographic studies of environmental science projects, the authors argue for a shift in how we conceptualize metadata—from fixed products to dynamic, communicative processes that must be continually maintained to support collaboration.
Complementing this, Tenopir et al. (2011) surveyed over 1,300 scientists and found broad support for data sharing, especially for purposes like verification and extended analysis. However, practical and cultural barriers—such as limited time, funding, and institutional support—impede progress. Scientists reported satisfaction with short-term data management but were less confident in the preservation and accessibility of data over the long term. The study emphasized that initiatives such as the NSF’s data management mandates and infrastructure projects like DataNet could bridge some of these gaps, especially when aligned with disciplinary norms. Continuing this line of inquiry, a 2020 study by Tenopir et al. offers an in-depth examination of global scientific practices and perceptions related to data management—including storage, sharing, and reuse.
Alter and Vardigan (2015) examined data sharing practices across five countries and identified recurring challenges: informed consent, data management, dissemination, and validation of contributions. Their work highlights the need for resources, training, and infrastructure to normalize data sharing and ensure proper recognition for researchers who do the work of making their data accessible.
Published in 2016 in Scientific Data, the FAIR Guiding Principles for scientific data management and stewardship aimed to enhance the Findability, Accessibility, Interoperability, and Reuse of digital assets. The principles emphasize machine-actionability—that is, enabling computational systems to find, access, integrate, and reuse data with minimal human intervention. This focus reflects the growing reliance on computational tools to manage the increasing volume, complexity, and velocity of data. Today, the FAIR principles are widely recognized as a foundational framework for research data exchange.
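To make machine-actionability concrete, consider a minimal sketch of how an automated harvester might vet a dataset record before attempting reuse. This is an illustration only: the record, field names, and checklist are hypothetical assumptions, not part of the FAIR specification or any repository's actual schema.

```python
# A minimal sketch of a FAIR-style machine-actionability check.
# The example record and all field names are hypothetical.

DATASET_RECORD = {
    "identifier": "doi:10.1234/example.5678",  # persistent identifier (Findable)
    "title": "Example survey of galactic spectra",
    "license": "CC-BY-4.0",                    # explicit license (Reusable)
    "format": "text/csv",                      # standard format (Interoperable)
    "access_url": "https://repository.example.org/datasets/5678",  # (Accessible)
}

# Fields a hypothetical harvester requires before processing a record.
REQUIRED_FIELDS = ["identifier", "license", "format", "access_url"]

def fair_gaps(record: dict) -> list:
    """Return required fields that are missing or empty, so a machine
    can decide whether the record is usable without human intervention."""
    return [field for field in REQUIRED_FIELDS if not record.get(field)]

if __name__ == "__main__":
    gaps = fair_gaps(DATASET_RECORD)
    if gaps:
        print("Not machine-actionable; missing:", ", ".join(gaps))
    else:
        print("All required fields present; record can be processed automatically.")
```

The point of such a check is less the code than the design principle: when identifiers, licenses, and formats are expressed in predictable, structured fields, computational agents can find, access, and integrate data at scale, which is precisely the behavior the FAIR principles are meant to enable.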
In a more recent context, Chabilall et al. (2024) discuss challenges faced by health researchers in sub-Saharan Africa. Their study surfaces a mix of structural and ethical concerns—from academic recognition and data governance to legal ambiguities and social trust. They call for clearer guidelines, robust legislation, and formal data-sharing agreements as prerequisites for building a trustworthy and equitable data-sharing culture in global health research.
The question of how to measure the impact of data sharing is also gaining traction. Emanuele and Minoretti (2023) explore this in relation to the NIH’s Data Management and Sharing Policy, proposing a “Shared Model for Shared Data.” They find that traditional author-level metrics fall short, and advocate for broader indicators—such as economic value and intangible benefits—to better capture the outcomes of primary data sharing.
Over the past two decades, numerous repositories and platforms have emerged to support research data management (RDM) and promote best practices. Some prominent examples include:
- Dryad Digital Repository – a global repository for data linked to scholarly publications.
- figshare – a platform for making research outputs citable, discoverable, and shareable.
- UK Data Archive – a leading resource for social science and humanities data.
- Harvard Dataverse – a widely used data repository supporting collaboration.
- ICPSR – a consortium for access to extensive social science data resources.
- DataONE – a network of environmental and earth science data repositories.
Murillo (2020) investigates 132 scientific data repositories (SDRs) to assess their support for data sharing, reusability, and adherence to FAIR principles, evaluating whether the information provided meets scientists’ needs for effective data reuse. Boyd (2021) examines research data repositories as infrastructures, analyzing metadata from 2,646 entries in re3data.org to identify their infrastructural characteristics. The study highlights how these repositories serve as critical information infrastructure for the scientific community, contributing to the socio-technical understanding of data repositories.
These platforms aim to make data sharing frictionless—but technology alone is not enough. Over the past fifteen years, a substantial body of scholarship has examined both the virtues and challenges of data sharing. Policies have evolved, platforms have matured, and funding mandates have increased. Yet data friction persists, embedded not in code but in culture. As Borgman warns, the obstacle is not merely technological—it is social and institutional.
“Technology can either create friction or serve as lubricant,” she notes, referencing her earlier work on Science Friction. Tools like Figshare or Dataverse may offer technical solutions, but unless policies and technologies are co-designed with the social realities of research in mind, those solutions fall short.
Silos persist not because of technical limits, but because of entrenched institutional cultures. Even seemingly simple efforts, like making course syllabi open, run into fierce academic resistance. “It’s not about the system—it’s about what the system represents.”
Three Decades of Transformation: From Digital Libraries to Knowledge Infrastructures
Over the last three decades, the landscape of scientific information systems has undergone a profound transformation—from early digital libraries to expansive cyberinfrastructures, and more recently to complex, dynamic knowledge infrastructures. Each phase reflects shifts in technological capability, epistemic norms, institutional coordination, and sociotechnical thinking. Christine L. Borgman has been a central figure in articulating, shaping, and critically examining this evolving terrain.
Digital Libraries: Access and Organization (1990s)
In the 1990s, the emergence of digital libraries marked a turning point in the organization and accessibility of scholarly information. The focus was on digitizing content, creating interoperable metadata standards, and building platforms to make scholarly materials available online. Borgman (2000, 2007) was at the forefront of this movement, contributing both empirical studies and theoretical frameworks that emphasized the social dimensions of scholarly communication. Her work helped define digital libraries not merely as repositories of content but as knowledge environments embedded in disciplinary practices and user behaviors.
Cyberinfrastructure: Platforms for Science (2000s)
By the early 2000s, the concept of cyberinfrastructure gained prominence—most notably through the U.S. National Science Foundation’s initiatives to support large-scale, data-intensive scientific collaboration. The goal was to build robust, shared computing and data environments to accelerate discovery across disciplines (Atkins et al., 2003). In one of the early episodes of InfoFire, Dan Atkins takes us through the fascinating history of computers from the days of vacuum tubes and computers the size of football fields to the initiatives and cyberinfrastructures for eScience and Open Science. Borgman (2010) participated actively in this era, serving on advisory boards and NSF working groups, while also offering a critical lens. She highlighted that technical infrastructure alone was insufficient; the success of cyberinfrastructure depended on understanding the social and organizational infrastructures that supported data sharing, stewardship, and reuse.
Knowledge Infrastructures: A Sociotechnical Turn (2010s–present)
The limitations of purely technical frameworks gave rise to the more holistic concept of knowledge infrastructures—a term that reflects the entanglement of people, technologies, institutions, practices, and policies involved in producing, managing, and sustaining knowledge. Borgman’s landmark book Big Data, Little Data, No Data (2015) articulates this shift. Drawing on ethnographic case studies of fields like astronomy, seismology, and biomedicine, she redefined data not as inert objects, but as situated, contingent, and deeply embedded in the contexts of their production and use.
Borgman also played a convening role in advancing this conceptual shift. In 2012, she co-organized the Workshop on Knowledge Infrastructures at the University of Michigan, which brought together scholars from information science, sociology, computer science, and policy to explore emerging frameworks (Edwards et al., 2013). This was followed by a second landmark workshop at UCLA in 2020, co-led by Borgman, which expanded the conversation to include global and interdisciplinary perspectives on sustainability, equity, and governance of knowledge infrastructures (Borgman et al., 2020).
Continuities and Futures
Across these three phases, what persists is Borgman’s insistence that infrastructures are never neutral. Whether digital libraries, cyberinfrastructure, or knowledge infrastructures, these systems reflect choices—about what knowledge counts, whose data is valued, and how scientific collaboration is shaped. Her scholarship underscores that meaningful data sharing and reuse require not only technical solutions but also social commitments, governance frameworks, and deep engagement with disciplinary and institutional cultures.
As research data continues to grow in volume, velocity, and visibility, understanding the evolution from digital access systems to sociotechnical knowledge infrastructures is essential. Borgman’s body of work provides both a map and a moral compass for navigating this complex terrain.
An Interdisciplinary Compass: Mentorship, Method, and the Power of the Field
Borgman’s intellectual compass has always pointed toward knowledge exchange as a social process. With a background in mathematics, computing, library science, and communication, her career has been defined by productive interdisciplinarity. Whether collaborating with astronomers or cognitive scientists, her method is clear: do the homework, learn the domain, and find common ground.
“The delight,” she says, “is in the exchange.” But she also cautions that interdisciplinary work carries overhead. True collaboration takes time, trust, and mutual respect.
Reflecting on her career, Borgman names Clifford Lynch and Michael Buckland as two of her most profound intellectual influences. She credits long, unstructured conversations with Lynch—about books, trends, and ideas—as formative. “He was a node in the network,” she recalls. “And a generous one.” Buckland, too, played a pivotal role early in her career, helping her navigate professional networks and scholarly terrain with care and curiosity. Borgman acknowledges Dean Robert Hayes, astronomer Alyssa Goodman, Nobel laureate Andrea Ghez, Buddhist scholar Stefano Zacchetti, and others as mentors and collaborators within her scholarly networks.
In turn, Borgman has mentored countless students and scholars across domains, often through hands-on, project-based courses that embed information science students within disciplinary research teams. “It opened doors,” she notes, “not just for the students, but for faculty who came to appreciate what the information field can offer.”
A Call to Action: Together, We Can and We Should
At a time when data is both abundant and ephemeral, and when infrastructures are simultaneously everywhere and invisible, Borgman’s work calls for a fundamental rethinking of how we steward, share, and sustain knowledge. Her message is both urgent and hopeful: we need to invest in the invisible work of knowledge infrastructures, and we need to do it together.
As she aptly concludes: “We’ve been underappreciated in the information field—but if we stay in our own little corner, we’ll stay underappreciated. The only way forward is through collaboration and showing what we can do.”
Investing in knowledge infrastructures—both visible, like technological platforms, and invisible, like professionals, policies, and practices—is the way forward.
Despite its centrality, there is no universally agreed-upon definition of “infrastructure.” Yet, as Brett Frischmann (2012) frames it—as a “shared means to many ends”—infrastructure is foundational to economic, social, and political advancement. It enables the creation and delivery of both public and private goods. Framing research data and its attendant systems as digital public infrastructure allows us to reimagine the architecture of science itself. It points us toward a collective vision in which scientific knowledge is not merely produced and consumed but sustained and stewarded by a community. In that sense, realizing the promise of science as a commons—and, ultimately, of society itself—truly does take a village.
Cite this article in APA as: Urs, S. (2025, July 24). The commons of science—why it takes a village: Christine Borgman on collaboration, curation, and the invisible infrastructure of knowledge. Information Matters. https://informationmatters.org/2025/07/the-commons-of-sciencewhy-it-takes-a-village-christine-borgman-on-collaboration-curation-and-the-invisible-infrastructure-of-knowledge/
Author
Dr. Shalini Urs is an information scientist with a 360-degree view of information and has researched issues ranging from the theoretical foundations of information sciences to Informatics. She is an institution builder whose brainchild is the MYRA School of Business (www.myra.ac.in), founded in 2012. She also founded the International School of Information Management (www.isim.ac.in), the first Information School in India, as an autonomous constituent unit of the University of Mysore in 2005 with grants from the Ford Foundation and Informatics India Limited. She is currently involved with Gooru India Foundation as a Board member (https://gooru.org/about/team) and is actively involved in implementing Gooru’s Learning Navigator platform across schools. She is professor emerita at the Department of Library and Information Science of the University of Mysore, India. She conceptualized and developed the Vidyanidhi Digital Library and eScholarship portal in 2000 with funding from the Government of India, which became a national initiative with further funding from the Ford Foundation in 2002.