Rethinking Science Funding: A Complementary Paradigm

Intellart
Nov 16, 2023

Research is generally regarded as transparent, ethical, and rational. And yet, it is not uncommon to come across speculation about the motivations and interests of specific scientists and their alleged collusion with private foundations or corporations. Most of these conspiracy theories have no basis, but they take hold because real cases of scientific misconduct do exist and are sometimes poorly addressed.

In Western countries, scientific research is funded through public grants and through private collaborations with public institutions. Importantly, most of these funds are allocated without consulting citizens.

As an alternative or supplement to the existing system, one can envision a model in which the community votes on the projects it deems relevant and allocates resources to support them. Such a system would be implemented on a blockchain, with every transaction publicly and comprehensively documented. Other advantages include granting citizens a voice, giving scientists insight into the genuine interests of society, and enabling anyone to scrutinize how funds are allocated, thereby reducing the likelihood of collusion (or the illusion of it) between public and private interests.
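
To make the idea concrete, here is a toy sketch of the kind of public record such a system would keep. It is an off-chain simulation built entirely on our own assumptions (the FundingLedger and Proposal names, stake-weighted votes, and a budget split proportional to vote weight are all illustrative), not an actual smart contract; its only purpose is to show that every submission, vote, and allocation leaves an auditable trace.

```python
# Toy, off-chain simulation of a community-governed funding ledger.
# All names and rules here are illustrative assumptions; a real system
# would implement this logic as an on-chain smart contract whose
# transaction history is publicly auditable by design.
from dataclasses import dataclass, field


@dataclass
class Proposal:
    title: str
    votes: dict[str, float] = field(default_factory=dict)  # voter -> stake


class FundingLedger:
    def __init__(self, budget: float):
        self.budget = budget
        self.proposals: list[Proposal] = []
        self.log: list[str] = []  # stands in for the public chain history

    def submit(self, title: str) -> int:
        self.proposals.append(Proposal(title))
        self.log.append(f"SUBMIT {title}")
        return len(self.proposals) - 1

    def vote(self, proposal_id: int, voter: str, stake: float) -> None:
        # Every vote is recorded, so anyone can later audit who backed
        # which project and with how much weight.
        self.proposals[proposal_id].votes[voter] = stake
        self.log.append(f"VOTE {voter} -> proposal {proposal_id} ({stake})")

    def allocate(self) -> dict[str, float]:
        # Split the budget in proportion to the total vote weight each
        # proposal received (one of many possible allocation rules).
        total = sum(sum(p.votes.values()) for p in self.proposals) or 1.0
        payout = {p.title: self.budget * sum(p.votes.values()) / total
                  for p in self.proposals}
        self.log.append(f"ALLOCATE {payout}")
        return payout


ledger = FundingLedger(budget=100_000)
pid = ledger.submit("Open soil-microbiome atlas")  # hypothetical project
ledger.vote(pid, "alice", stake=10.0)
print(ledger.allocate(), ledger.log, sep="\n")
```

A proportional split is only one possible rule; quadratic voting or matching schemes could replace allocate() without changing the public-ledger principle.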

This sounds fantastic in theory, but it would be remiss not to mention the challenges such a system would undoubtedly encounter. First, it automatically excludes anyone who lacks familiarity with blockchains or is simply not tech-savvy enough. Second, evaluating the merits of a scientific project is highly challenging for most citizens: Is it realistic? Is the funding justified? Does the researcher possess the necessary expertise? Third, some scientists might exploit the system by promising enticing projects that garner community votes but lack solid scientific grounding.

So, what could the solution be? A conventional one would be to introduce an external oracle that rates the scientific reliability of projects. However, this approach contradicts the essence of a blockchain: it reintroduces a central authority into a decentralized system. An alternative path could rely on Open Science tenets; a by-product of making science universally accessible is that participants themselves can be expected to stay well-informed and objective. Nevertheless, it is essential to bear in mind that self-regulated democratic systems can produce unexpected outcomes. Given these considerations, the question remains: is it worth the risk?

Research involves exploring questions initially posed by scientists in the past. Nothing emerges from thin air, not even the most unexpected breakthrough. This is precisely why scientific articles cite previous work: some adopt pre-established techniques, while others build their experiments on the conclusions of prior studies. (Fun fact: there is an unwritten rule that generally discourages citing articles in order to criticize or attack them.) A useful metric for assessing the quality or impact of an article is therefore its citation count, which can easily be retrieved from Google Scholar. However, in the current landscape, the focus often appears to be on the publication venue rather than on the impact the research has had within the community. In other words, it is generally seen as more prestigious to publish a paper in a journal like Science, Cell, or Nature than to have a paper with hundreds of citations.
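
On the retrieval side, there is no official Google Scholar API, but unofficial libraries exist. Assuming the third-party scholarly package (https://pypi.org/project/scholarly/) and a purely hypothetical author name, a lookup might read as follows; treat the field names as assumptions, since the package scrapes pages whose structure Google does not guarantee.

```python
# Unofficial Google Scholar lookup via the third-party `scholarly` package
# (pip install scholarly). Scraping-based, so fields, behavior, and rate
# limits may change. The author name below is purely hypothetical.
from scholarly import scholarly

author = next(scholarly.search_author("Jane Doe"))  # first matching profile
author = scholarly.fill(author)                     # fetch the full record
print(author.get("citedby"), author.get("hindex"), author.get("i10index"))
```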

A possible way to better evaluate the quality of a scientist is to measure the citations they accumulate. In addition, the dates of these citations can offer insight into whether the scientist remains up to date in their field. Nonetheless, this metric remains somewhat biased, because popular scientists are cited more than their peers within the same field. This phenomenon is partly due to their fame translating into influence, which creates an incentive to cultivate favorable relationships with them. Additionally, their discoveries tend to be highly publicized, contributing to their elevated citation counts. Despite these limitations, the human intervention needed when utilizing such a score[1] would be relatively minimal compared to an external review by experts. Consequently, this type of metric could serve as a signal in decentralized, self-governed funding systems.
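
Such a score is also trivial to compute once per-paper citation counts are known. Following the definition given in [1], a minimal sketch (with made-up citation numbers) looks like this:

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that at least h papers have >= h citations each [1]."""
    ranked = sorted(citations, reverse=True)
    # With counts sorted in descending order, the condition c >= rank
    # holds for a prefix of the ranks; the length of that prefix is h.
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)


def i10_index(citations: list[int]) -> int:
    """Number of papers with at least 10 citations each."""
    return sum(1 for c in citations if c >= 10)


# Made-up citation counts for one researcher's papers:
papers = [48, 33, 30, 12, 7, 5, 5, 1, 0]
print(h_index(papers))    # -> 5 (5 papers have >= 5 citations, but no 6 with >= 6)
print(i10_index(papers))  # -> 4
```

Because the input is just a list of counts, anyone can recompute the score deterministically, which is precisely what makes it usable as a signal in a self-governed system.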

While this may sound revolutionary, confronting the status quo head-on is often not the best approach. At Intellart, we believe in delivering value through incremental solutions that progressively navigate the economic, social, and academic intricacies of the current science publishing system. If you’re curious about the nitty-gritty of our solution-building journey, our blogs are the place to begin. We assure you: it won’t be boring.

“Most decisions should probably be made with somewhere around 70% of the information you wish you had. If you wait for 90%, in most cases, you’re probably being slow.” — Jeff Bezos

References

[1] H-index: https://en.wikipedia.org/wiki/H-index, “The h-index is defined as the maximum value of h such that the given author/journal has published at least h papers that have each been cited at least h times.” Other author-level metrics (https://en.wikipedia.org/wiki/Author-level_metrics) exist as well, such as the i10-index (the number of publications with at least 10 citations).
