Andrea Medina-Smith on Making Research Data More FAIR

February 9, 2026

It’s become cliche since Clive Humby coined it in 2006, but data is indeed the new oil. It’s a mantra repeated by Andrea Medina-Smith, whose role as executive editor for Sage Data puts her in the top tier of today’s “petroleum engineers,” if you will. But in contrast to the image of those petro-crats of old, Medina-Smith is focused on ensuring the world as a whole benefits from this new commodity, and so both inside Sage and outside it, she champions public access mandates for scientific research data, helping researchers make their data FAIR (findable, accessible, interoperable, reusable).

Her focus on the just side of data collection, storage and discovery is demonstrated by work like the research guide How Feminist is Your Open Data?, which she penned for Sage Research Methods’ data literacy library.

Medina-Smith took her master’s degree in library science from Boston’s Simmons College and is currently in a joint Ph.D. program between San Jose State in California and Manchester Metropolitan University in the United Kingdom. For her dissertation, she’s examining the impact of two 2013 memos: the so-called Holdren memo from the federal Office of Science and Technology Policy (OSTP), titled Increasing Access to the Results of Federally Funded Scientific Research, and another from the Office of Management and Budget, which opens with the line, “Information is a valuable national resource and a strategic asset to the Federal Government, its partners, and the public.” Fortuitously, she entered her Ph.D. program just three weeks before a subsequent 2022 memo from then-OSTP head Alondra Nelson (Ensuring Free, Immediate, and Equitable Access to Federally Funded Research) updated the earlier guidance by calling for free public access to all research created at taxpayer expense.

As she approached her one-year anniversary at Sage, the parent of Social Science Space, we took the opportunity to quiz her about the state of data in academe and beyond.


Michael Todd: I’m interested in your journey both before you got to Sage and then what led you to this particular job.

Andrea Medina-Smith: I originally went to library school to become an archivist. Very quickly when I got there, I discovered I didn’t want to be an archivist after all. But what I did want to do was learn how to do new things with old stuff. So that led me to digital archiving, and I did that for a little while. At one point, when I was searching for a new job, I ended up getting a position at the National Institute of Standards and Technology (NIST), which is the US metrology organization, as a metadata librarian working mostly on materials. So we had a photograph collection that needed to be catalogued, or we had a museum collection that needed to be catalogued … and I started working with their journal (the library at NIST has published a journal for over 100 years).
So I was doing all these sorts of things, and in 2013 two memos came out from the Office of Science and Technology Policy and the Office of Management and Budget that basically said: if you are using taxpayer money to do scientific research, through grants or within an agency, the results from that study need to be made available to the public. We call these the public access mandates. I was assigned to work on them because I’d been working on our journals, and also because a lot of archival theory really could apply to research data. There are ideas about original order, and about giving descriptions at the collection level rather than at the individual data-point level. All those sorts of ideas had already been figured out by the archives world.

And so I started working on data. So for about 12 years I was a data librarian at NIST, helping folks learn how to make their data ‘FAIR’ – findable, accessible, interoperable, and reusable. This included how to follow the rules for public access that we had developed and in some ways how to advertise and market the fact that they’ve got these data available for people within their scientific discipline.

And around the end of 2024, between the changes that were coming because of the new administration and simply being ready for something new, it became clear that I was ready for a new challenge. Sage approached me; they were looking for a new executive editor for the Sage Data product. And so here I am.

Michael Todd: What’s the difference between dealing with data and dealing with other scholarly stuff?

Andrea Medina-Smith: We’ve shared data in tables within journal articles or within compendiums for centuries, really. But figuring out how to do it in the most efficient way as digital data, we haven’t entirely figured out. Journals went from paper to essentially a facsimile of paper in the PDF, and to some extent when you’re looking at it as an HTML page online, it’s pretty similar.

There are a lot of things that people have thought about doing with journal articles in a digital format. None of them have really caught on, and we’ve had the same thoughts when it comes to data. You know, we don’t just want another table that people can look at; what we’re actually trying to do is make it interoperable between systems. So your data is out there, but somebody who has a slightly different microscope should be able to slurp up that data and reuse it. It shouldn’t just be for one instrument if you’re looking at quantitative data.

Then there are lots of humanist questions about what sort of consent people gave for those interview responses and that data. Can they be reused? Can they be posted online, or should just a description of them be posted online, with reuse limited to vetted, verified researchers? Has the data been anonymized enough that you can’t go back and figure out who gave those responses? So there are lots of ethical questions surrounding research data, especially qualitative research data, that aren’t there when it comes to other, more “traditional” research outputs.

Michael Todd: We should probably clarify something, although I expect our audience understands this. You just brought it up: the difference between qualitative data and quantitative data.

Andrea Medina-Smith: So quantitative data is typically numbers that can be analyzed statistically or through other mathematical means. Sage Data has mostly statistical data.

Qualitative data is based on the experience of informants or the researcher. It’s typically language-based, often prose: interviews, say, or free-form survey questions. Art pieces can be considered qualitative data, the experience of experiencing art. There are different types of data out there.

My experience is mostly with research data coming from the hard sciences. Now I’m working with statistical data, but that’s not the only sort of data out there. There’s all this qualitative stuff that Sage doesn’t capture within Sage Data, but that’s out there in lots of different public formats.

Michael Todd: But I wanted to go over some of the issues around data.  One is just the volume of data created. It’s not even a proverbial fire hose; it’s like 100 fire hoses. And then every day we get another 100 fire hoses while the other 100 fire hoses are still going.

Andrea Medina-Smith: So that’s where data librarians should come in. They should be working with researchers to curate the data, making sure especially that the most reusable data is out there. When you’re making it public, you might not be making everything public at the start, but you want to have it described in such a way, with metadata, that you, as the researcher, know in six months what you’re looking at. If you’re working in team science, we call it the “bus factor”: if you go out and get hit by a bus the next day, can others use the stuff you were working on to pick up where you left off?

And when it comes to the volume, I think there is a place for AI, not necessarily generative AI, to come in and help classify the data so that it can be used by either your own team or others in the future. I think that’s a really interesting place where a lot of very smart people are working.

Outside of that, again, it’s a great place for librarians to be involved, to help researchers curate their work, to make sure, again, not only the most reusable stuff is out there, but that it’s actually usable because it’s findable with good metadata and descriptions, it has a persistent identifier to access it, and all those sorts of things, and more importantly that you’ve got a good license on it so you tell others how they can or can’t use that data.

Michael Todd: Again, I’m sure our audience is familiar with metadata, but I’m going to ask you to define it anyway and talk about what some of the challenges are with metadata now.

Andrea Medina-Smith: A good way to describe metadata, beyond the pithy ‘data about data,’ is this: if you look at a can of baked beans and take the label off, you just have this silver thing. That’s kind of like a digital file. You can’t really see what’s in it just by looking at it. We have an idea, but we don’t know exactly what’s in it. The label is what tells you: the title, “Baked Beans,” the nutrition information. The ingredients could be like the table of contents. All those sorts of things describe what’s in it, and that’s metadata. The challenge is applying appropriate metadata.
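To make the label analogy concrete, here is a minimal sketch of what such a "label" might look like for a data file. The field names and all values are invented for illustration, loosely echoing common descriptive-metadata schemas such as Dublin Core:

```python
# The file itself is the unlabeled can: bytes, with no hint of what's inside.
# This metadata record is the label. All field names and values are invented.
metadata = {
    "title": "Finch beak measurements, 1992 field season",   # the front of the can
    "description": "Caliper measurements of beak length for 120 specimens",
    "creator": "Example Research Group",                     # who 'canned' it
    "date": "1992",
    "format": "text/csv",            # technical metadata: what you need to open it
    "contents": ["specimen_id", "beak_length_mm", "site"],   # ~ the ingredients list
}

def label(meta):
    """Render the short 'label' a repository landing page might show."""
    return f"{meta['title']} ({meta['date']}), {meta['format']}"

print(label(metadata))
# → Finch beak measurements, 1992 field season (1992), text/csv
```

A repository or catalog indexes exactly these fields, which is what lets a searcher judge what is "in the can" without downloading and opening it.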

It comes down to finding the right team to do it. Researchers can usually do a first pass because they know that data intimately, especially descriptive information about how the different files work together. That would be great coming from a researcher. Then, teaming up with librarians and other curators, we can add more descriptive metadata, more technical metadata.

Then there’s the technical side: what sort of file it is. If it’s from 1992, say, we need to make sure we have a copy of MS-DOS to run it, that sort of thing. And yes, algorithmically applied metadata isn’t going to save us, but it’s going to help us save a lot of time, because we’ll be able to use different systems that can tell us a lot about a CD-ROM that we found, or a floppy disk, or even just the file drive that was pulled over into Dropbox.

And after the team does that initial technical work, administrative descriptions come in: where it came from, who made it, when it was made, all that sort of stuff. And now with generative AI it may also be able to help do some of the classification and description for humans as well.

Michael Todd: Metadata to me kind of leads to discoverability. How do I find the data I need? Not necessarily the data I know I want or the data I know exists, because if I know it exists, I have a decent chance I can find it. But how do I discover both what I know is out there and also the data sets I don’t know about?

Andrea Medina-Smith: So it’s getting people to do a rich description. And that’s really hard, generally because it ends up being one of the last things that happens within a project. The other thing I would say about discoverability is making sure the data gets described in a way that’s appropriate for a given audience. So the demographers who are following the Census are going to understand a certain type of metadata and certain really quite esoteric descriptions. But the general public wants something very different. They want to know: “Can I get information on San Diego, California from 2000 out of this file?” And what sort of questions were asked? Was it just about number of cars? Was it transportation? Was it family makeup? What’s happening within this particular chunk of data?

So it’s about understanding your audiences and being able to differentiate the metadata for the right sort of discoverability. There is also a push within larger repositories to start adding what we’re calling “AI layers.” So you can do a natural language search and go back and forth with a chatbot to figure out exactly what you’re looking for. And then that chatbot will hopefully discover the right data and give you, instead of the huge table, just the rows that you’re looking for.

Michael Todd: So that then leads to accessibility. I know some things exist. Darwin studied the beaks of finches. I’m assuming he took calipers and measured them and wrote his findings down. Where would I find that today? How do we ensure that data is accessible and open? How do we find that kind of stuff?

Andrea Medina-Smith: First, we want to make sure that it should be open. The measurements of finches’ beaks are probably a great thing to be open information, but what about data from an indigenous group about certain religious rites or the like? That probably doesn’t need to be open; it’s not ethically correct to have it open. You might want to be able to find information in the metadata, but not necessarily have the data itself open. So it’s learning what the appropriate level of openness for a given data set is.

Next, having persistent identifiers: persistent URLs or digital object identifiers (DOIs). A DOI will always take you to a landing page about that data. Then there’s having other standards applied: standard descriptions, standard licenses, so you know what you can and can’t do with that data. All of those are going to vary based on the type of data and the discipline it comes from. Age can be a factor. If it’s too old, it may reflect norms that would not be appropriate now. If it’s too new, it might be under an embargo because the original researchers are still working on it; they put a description of it out there so they can find collaborators, but they don’t want to make it open yet. So there are lots of different ways to make sure that data is findable. And if one were to go to FAIRsharing.org, you’d be able to find out more about making stuff as findable as possible.
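As a small illustration of why persistent identifiers work: a DOI is just a prefix/suffix name, and the doi.org resolver turns it into a stable URL that redirects to the data set’s current landing page even if the data itself moves between hosts. The DOI below is made up for the example:

```python
def doi_to_url(doi: str) -> str:
    """A DOI resolves through the doi.org proxy to a landing page,
    so links keep working even when the data moves between hosts."""
    return f"https://doi.org/{doi}"

# Hypothetical DOI; real DOIs follow the same prefix/suffix shape.
print(doi_to_url("10.12345/finch-beaks.v1"))
# → https://doi.org/10.12345/finch-beaks.v1
```

The indirection is the whole point: citations embed the DOI, and only the registry’s pointer to the landing page ever needs updating.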

Michael Todd: From where you sit, where do you feel we are in the open data movement? Acknowledging that things are situational, where do we stand? Are we 50 percent of the way to where we ought to be? Are we 0 percent? Are we 99 percent of the way?

Andrea Medina-Smith: I’d probably say we’re somewhere between 40 and 50 percent.

This is not 100 percent, but the stereotype among the people I was working with was that the closer you were to retirement, the less likely you were to want to try these new things. It was just something else to do that took you away from your research. People coming directly out of research institutions, postdocs and the like, were more open to this idea of teams, team science and open science.

From my personal perspective, science should be open because that’s the ethically right way to do it, as open as possible. It’s not always going to be that way, but that’s what I think it should be, because it’s for the good of mankind, right?

And as more people adopt these practices, not only will open data and open science become more robust, but we’re hoping that the practice of science and the way it’s done becomes more ethical and more equitable. We’re getting there. Everybody’s got their own positionality: we can’t expect people from the Global South to be doing the exact same thing as a fully funded research team from the US, but there are lots of ways to be open no matter what your positionality is.

Michael Todd: How concerned are you about data rot? Both in the sense that maybe those Darwin notebooks are physically rotting, and that there are problems in the digital record, or perhaps even intentional malfeasance.

Andrea Medina-Smith: Digital preservation is its own huge realm. I am not as concerned as I probably should be, because there have been standards around how to preserve, and how much to be saving, for a long time. More recently a lot of it is done dynamically for you. There’s the possibility of failure at all points, and it’s probably something I should be more concerned with, but I’m more concerned about the digital stuff than the physical.
Many of the “important papers” from the past 300 to 400 years are already in climate-controlled situations. So those end up lasting a long time, and paper-and-pen data are going to last a lot longer than more recent digital data.

Michael Todd: A sister question to that is format. I had a spreadsheet in FileMaker Pro years ago, and due to formatting and disk-type issues I ended up redoing it by hand in Excel, which I assume would last longer; maybe not long enough, but long enough for my purposes. What are the issues around format and how do we address them?

Andrea Medina-Smith: Typically we recommend that you use open formats. So instead of Excel or FileMaker Pro, you’re using a comma- or tab-delimited file. You know, Excel does great, amazing things where you can put in macros and all that sort of stuff. But when you’re actually serving up the data to someone else, or saving it for yourself for the long term, do you really need all those things? Can you strip them out, back to the actual data itself? For documents, if you don’t need formatting, if you don’t need it to be pretty and just need the words, text formats are great.
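The "strip it back to the data" step can be as simple as writing the values out as plain comma-delimited text with nothing but the rows themselves. A minimal sketch using Python’s standard library, with invented example data:

```python
import csv

# Invented example data, as it might sit inside a spreadsheet
# once the macros and styling are stripped away.
rows = [
    ["specimen", "beak_length_mm", "year"],
    ["finch_01", "12.4", "1992"],
    ["finch_02", "11.9", "1992"],
]

# Plain CSV: no formulas, no formatting, just the data itself,
# readable by essentially any tool for decades to come.
with open("measurements.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
```

The resulting file opens in Excel, R, Python, or a text editor from any era, which is exactly the longevity argument for open formats.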

If you do need the formatting, there’s PDF, or PDF/A, an archival format that’s going to be open and stick around for a long time. So yeah, avoid anything that has a proprietary nature to it, that’s going to change really often, and whose support could just disappear if the company goes bankrupt or vanishes. Not that we think that’s going to happen with the likes of Microsoft, but it could. Or they could just stop supporting one format or another and not have backward or forward compatibility.

Michael Todd: Let’s talk a little bit about disappearance, because there are different kinds. There’s I retire and take all my research data with me, and I die and nobody knows about it, and it’s lost forever. Or I’m a bureaucrat and I don’t like a certain data set and I get rid of it. Or there’s some accident, or it was on a CD-ROM and now I can’t read it. What can we do besides, you know, being really good stewards and stuff like that? Is there anything we can do, that individuals can do, about this?

Andrea Medina-Smith: I would say before January 2025, the biggest concerns with disappearance were really poor data management and benign neglect. Again, you retire, your things go through records management processes. Most of it is not something that’s going to be kept for a long time, so it gets disposed of whether or not it is historically or research-wise really important. So that’s the benign neglect side. Or you’ve put it on a CD-ROM and don’t move it forward onto a thumb drive and into the cloud.

Since then (and other countries around the world have long experienced this with their paper files and the like), there’s been a change in the administration, and policies have come about that mean the administration is asking for data to be taken down or even changed. On top of that, and probably what I think will in the end be more impactful, is the cutting of budgets for the U.S. statistical agencies and research agencies, so that data we’ve been collecting since, say, 1970, we’ve stopped collecting.

So there’s disappearance in that sense as well, where a time series just ends and we don’t know if it’s ever going to be collected again. But there’s also the issue of data being taken down and data being changed. For example, we know that in data coming out of the CDC and a couple of other agencies, ‘gender’ has been changed to ‘sex,’ and they’ve gone back and done that retroactively. There are a lot of groups out there working to capture data before it’s pulled down or changed. The most prominent of these, I would say, is called the Data Rescue Project. That’s a bunch of librarians and academics who are pulling data down onto their own hard drives and the like, and then putting it into a bunch of different repositories. The ICPSR out of the University of Michigan has created a new repository called DataLumos specifically for federal data that’s being pulled in. There are groups within Europe making copies of American data because of the fear of it disappearing.

So there are lots of different groups working on this, and it’s fantastic that all these individuals and groups have stepped up, because unfortunately, those bureaucrats and other federal employees often can’t do it. They are prevented from protecting this data that they have worked so hard to create, or to curate and make accessible to the public. And they’re being asked to do things that probably do not sit well with them as professionals.

So there are lots of groups doing it. We’ve got the data itself. What I’d like to know is, in 5/10/15 years, when we’re going back to reuse this data, did we capture the metadata and how to use it in the right way? I think we’re going to find that in a lot of cases we have, but I’m afraid there will be one or two big newsy stories that say, ‘Well, we captured this data, but we can’t use it anymore,’ and that becomes its own big thing.

Michael Todd: The data, and I don’t mean to be disrespectful, often seemed kind of boring, I thought. But now the provenance of data and the reliability of data have really kind of like jumped into the top tier of concerns.

Andrea Medina-Smith: I’ve worked with the mantra that data is the new oil for a long time now. And that was in a news piece, I don’t know, 15-plus years ago. And it’s true. It’s what is moving our economy along. And those economic factors, in my opinion, should have pushed the federal government not only to maintain the statistical data sets, but also the research data, because of the tech transfer that could happen with the data being out there. It’s really a boon for the American economy.

This administration has different ideas, but what we can all agree on is that the administration sees the power of the data. If you control the data and you control the evidence, whether it’s reliable or not, there’s a sense of power in that.

Michael Todd: I wonder if you could compare the United States with the rest of the world?

Andrea Medina-Smith: So when it comes to open data, the US can sort of be thought of as behind Europe, for a couple of reasons. One, Europe is very used to working collectively and the US is not. So if you look at France or England or the like, they’ll have national repositories. We don’t have anything that is a true repository at the national level. We have repositories for various disciplines that are covered by various agencies. So agencies will have repositories, but there’s not one actual repository, not that there needs to be.

But when making or following standards and setting best practices, it’s often easier to do that with one or two big repositories than it is with our collection of many, many repositories. I know there are lots of people still working within the federal system to make those repositories as valid and reproducible as possible.

There is also an ethos of science for the public good among politicians in Europe that I haven’t seen so much here.

And then, you know, we’re light years ahead of some places that have not had the resources to focus on scientific or statistical research and getting that out in a digital way. So maybe we’re middle of the road. We have lots of universities that are doing amazing things, and we have lots of small research organizations, universities and agencies that don’t have the same sort of resources and are having a much harder time, but are expected to work collaboratively without necessarily getting the same benefits from those collaborations.

Michael Todd: How does critical thinking fit into this? What is the nexus of critical thinking and data?

Andrea Medina-Smith: So we aim through our educational systems, both secondary and tertiary, to teach people how to think critically. And one of the parts of thinking critically is being able to assess evidence. Often evidence is data. You know, can you assess it for validity, for provenance, for reliability? Can you do all of those things? And if you are a critical thinker, you should be able to do that with evidence of any sort. And data is one of those sorts of evidence.

And in daily life, convenience features illustrate this more and more. Algorithms rely on your data, your personal data, to make choices for you about what you’re going to see on social media. Or if you’re using ChatGPT to come up with a grocery list or the like, having an understanding of where the data behind those results comes from and how it’s being used is going to be really important for critical thinking. So if GPT is coming up with my grocery list, but I’m a vegetarian, why does it keep suggesting chicken? Because it’s not my data alone that it’s working with. It’s working with piles and piles and mountains of data, and chicken comes up on a lot of grocery lists. So being able to look at data, question it appropriately and see how it’s being used as evidence is a hallmark of somebody who’s thinking critically.

Michael Todd: What are the tools that we need?

Andrea Medina-Smith: If we’re going to have a populace that’s using critical thinking and using data as evidence, we need to have access to that data, right? And there’s a bunch of different ways we can do that. We can do that through repositories, which is mostly how it’s done right now. We can do it in the old-fashioned way, which is at the bottom of an academic paper, where it says if you’re interested in this, contact me and hope that that researcher gets back to you.

We can do it in new ways that we haven’t really imagined, or have imagined but haven’t quite figured out how to do technologically at scale, which is basically computer to computer: the computer being able to act on the data based on characteristics of the data.

So those are some of the tools that we have. But first you have to learn how to use the data, right? And so there are lots of disciplines and universities that have little sandboxes, especially in the hard sciences. They’ll have various test data sets, and you learn how to build your own through the lab work that you do. When it comes to statistical and economic data, you can pull it down from the websites of the agencies, or often the NGOs, that are creating it, and manipulate it yourself. You can learn tools like Python and R to analyze and visualize those results.
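For a sense of how low the barrier is once you have a table in hand, here is a first-pass analysis in Python using only the standard library. The unemployment figures are invented for the example:

```python
import statistics

# Invented example: annual unemployment rates, as might be pulled
# down from an agency website or a service like Sage Data.
unemployment = {2019: 3.7, 2020: 8.1, 2021: 5.4, 2022: 3.6}

rates = list(unemployment.values())
print(f"mean rate: {statistics.mean(rates):.2f}")   # simple summary statistic
print(f"range: {min(rates)} to {max(rates)}")        # spread across the years
```

Real analyses would reach for libraries like pandas or matplotlib, but the point stands: a delimited file plus a few lines of code gets you from raw table to summary.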

Sage Data sits in this sort of middle ground where you don’t have to know a programming language. You don’t have to know where the data sits exactly within those agency repositories or agency websites or NGO websites. You don’t have to know how to use an API to pull it down because a lot of them you have to pull down using an API. So with Sage Data, you can go in, find the information and data you’re looking for and visualize it in a variety of ways.

And the one thing that I really like about it is that you can use the same set of skills across all these different sorts of data. We’ve normalized it enough so that instead of learning five different systems, I can learn one and bring data from the OECD, the United Nations Food and Agriculture Organization, and the US Department of Education all together into one data set. Or I can compare similar data from different data sets and visualize that together.

Michael Todd: How big is the universe that Sage Data has?

Andrea Medina-Smith: We have close to 100 sources at this point. We have billions of data points, upwards of 80 billion data points in there. We have 550 different databases and over 1,100 data sets. So it goes ‘source,’ then ‘database’ and then ‘data set’ underneath it. And we classify them into 16 different topics, which can all be found on the front page of Sage Data, and we have a good spread throughout those.

List of the topics
Agriculture and Food; Banking, Finance, and Insurance; Criminal Justice and Law Enforcement; Education; Energy Resources and Industries; Government and Politics; Health and Vital Statistics; Housing and Construction; Industry, Business, and Commerce; International Relations and Trade; Labor and Employment; Military and Defense; Natural Resources and Environment; Population and Income; Prices, Consumption, and Cost of Living; Transportation and Traffic
We do skew a little bit toward business, economics and demographics. One collection that is used quite a bit is the information on criminal justice and law enforcement. We have some really kind of fun and random ones; for instance, we capture information on on-time departures for airlines through the Department of Transportation.

Then we have really important data about greenhouse gas emissions and the like from the UN Food and Agriculture Organization. I know that a lot of people who use our site are looking for more and more information on climate change. We have lots of information on labor statistics, so not just unemployment and the like, but how often are folks employed in various industries.

And then in our premium offerings we have much more, things like national and subnational statistics about the Chinese population, or consumer segmentation data.

Michael Todd: So how close are we — I mean the data world — to really truly democratizing data access and usage — the ability to find and make sense of the data that is out there?

Andrea Medina-Smith: I’m gonna say it’s uneven. Some organizations, the OECD, some of the stuff coming out of the Census Bureau, do it really well, where some Joe Schmo can come off the Internet looking for information on X and not only find it, but have it visualized and contextualized in a way that’s useful to them. I think there are a lot of businesses that do amazing data gathering and can sell you that data for marketing and the like. They do really interesting things, but you need to be able to purchase that. Or you are a university student with access to a subscription to Sage Data, Statista, Social Explorer, one of those that the university, typically the library, has purchased. So again, I think it’s uneven.

Would I love to see more? Yes. And I think that was the goal of those memos I talked about at the beginning: to make that information not only available, but usable to the general taxpayer.

Michael Todd: My last official question: predictions for data future.

Andrea Medina-Smith: Well, it’s going to become more and more important. As we base more of our decisions, both important and unimportant, on these LLMs and other AI technologies, we need to make sure that data is valid, reliable and not corrupted, and that’s just at a very basic level. Anecdotally, the other day we were pulling down some data and it was corrupted, and now we’ve got to find somebody at the agency to see if they can locate an uncorrupted version. Hopefully it exists somewhere; that’s a data preservation issue. It’s going to become more and more important that this data that’s being used for everything is reliable and useful.
But on the flip side, we get back to that critical thinking: how much of our critical thinking can we hand off? What biases have been built into these algorithms, on purpose or not? And understanding how even the hardest of hard-sciences data can end up being biased or skewed. So that’s really interpretivist, sort of constructivist: we’re making meaning out of the world and the data.

Social Science Space editor Michael Todd is a long-time newspaper editor and reporter whose beats included the U.S. military, primary and secondary education, government, and business. He entered the magazine world in 2006 as the managing editor of Hispanic Business. He joined the Miller-McCune Center for Research, Media and Public Policy and its magazine Miller-McCune (renamed Pacific Standard in 2012), where he served as web editor and later as senior staff writer focusing on covering the environmental and social sciences. During his time with the Miller-McCune Center, he regularly participated in media training courses for scientists in collaboration with the Communication Partnership for Science and the Sea (COMPASS), Stanford’s Aldo Leopold Leadership Institute, and individual research institutions.
