Projects rooted in artificial intelligence (AI) are fast becoming an integral part of the modern technological paradigm, aiding decision-making across sectors from finance to healthcare. However, despite significant progress, AI systems are not without flaws. One of the most critical issues facing AI today is data bias: systematic errors in a data set that skew the results of any machine learning model trained on it.
Since AI systems rely heavily on data, the quality of the input data is of utmost importance: skewed information can embed prejudice in the system, which can further perpetuate discrimination and inequality in society. Ensuring the integrity and objectivity of data is therefore essential.
For example, a recent article explores how AI-generated images, specifically those created from data sets dominated by American-influenced sources, can misrepresent and homogenize the cultural context of facial expressions. It cites several examples of soldiers or warriors from various historical periods, all with the same American-style smile.
Moreover, such pervasive bias not only fails to capture the diversity and nuance of human expression but also risks erasing vital cultural histories and meanings, potentially affecting global mental health, well-being and the richness of human experience. To mitigate such partiality, it is essential to incorporate diverse, representative data sets into AI training processes.
Several factors contribute to biased data in AI systems. First, the collection process itself may be flawed, with samples that are not representative of the target population, leading to the under- or overrepresentation of certain groups. Second, historical biases can seep into training data and perpetuate existing societal prejudices; for instance, AI systems trained on biased historical data may continue to reinforce gender or racial stereotypes.
Third, human biases can inadvertently be introduced during the data labeling process, as labelers may harbor unconscious prejudices. Finally, the choice of features or variables used in AI models can itself produce biased outcomes, since some features correlate more strongly with certain groups, leading to unfair treatment. To mitigate these issues, researchers and practitioners need to be aware of these potential sources of bias and actively work to eliminate them.
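To make the sampling problem concrete, here is a minimal sketch in plain Python (the function and field names are hypothetical, not from any particular toolkit) that compares a data set's group proportions against known population shares, flagging under- or overrepresented groups before training begins:

```python
import collections

def representation_report(records, group_key, population_shares):
    """Compare a data set's group proportions against known population shares.

    records: list of dicts, each carrying a demographic attribute under group_key.
    population_shares: dict mapping group -> expected share (0..1).
    Returns dict mapping group -> (observed share, expected share, ratio).
    """
    counts = collections.Counter(r[group_key] for r in records)
    total = sum(counts.values())
    report = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        ratio = observed / expected if expected else float("inf")
        report[group] = (observed, expected, ratio)
    return report

# Example: a sample that overrepresents one group.
records = [{"group": "A"}] * 80 + [{"group": "B"}] * 20
print(representation_report(records, "group", {"A": 0.5, "B": 0.5}))
# {'A': (0.8, 0.5, 1.6), 'B': (0.2, 0.5, 0.4)} -> group B is underrepresented
```

A ratio far from 1 in either direction is a cue to rebalance or reweight the sample before it ever reaches a model.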
Can blockchain make unbiased AI possible?
While blockchain technology can help keep AI systems neutral in certain respects, it is by no means a panacea for eliminating bias altogether. AI systems, such as machine learning models, develop discriminatory tendencies based on the data they are trained on; if the training data contains predispositions, the system will likely learn and reproduce them in its outputs.
That said, blockchain technology can contribute to addressing AI biases in its own unique ways. For example, it can help to ensure data provenance and transparency. Decentralized systems can track the origin of the data used to train AI systems, ensuring transparency in the information collection and aggregation process. This can help stakeholders identify potential sources of bias and address them.
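As a rough illustration of what provenance anchoring could look like, the sketch below (the record structure is assumed for the example, and the actual on-chain publishing step is out of scope) fingerprints each training record and derives a single digest that can be stored on a blockchain, letting anyone holding the same records verify later that the published training set was not altered:

```python
import hashlib
import json

def record_fingerprint(record: dict) -> str:
    """Deterministic SHA-256 fingerprint of one training record."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def manifest_root(records: list) -> str:
    """Single digest over all record fingerprints, suitable for anchoring on-chain.

    Sorting makes the root independent of record order, so independent
    parties can recompute and compare it.
    """
    fingerprints = sorted(record_fingerprint(r) for r in records)
    return hashlib.sha256("".join(fingerprints).encode("utf-8")).hexdigest()

training_set = [
    {"source": "survey-2023", "label": "smiling", "region": "US"},
    {"source": "archive-scan", "label": "neutral", "region": "JP"},
]
print(manifest_root(training_set))  # publish this digest to the chain
```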
Similarly, blockchains can facilitate secure and efficient data sharing among multiple parties, enabling the development of more diverse and representative data sets.
Also, by decentralizing the training process, blockchain can enable multiple parties to contribute their own information and expertise, which can help mitigate the influence of any single biased perspective.
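One established technique that fits this description is federated averaging, in which each party trains locally and only the resulting model weights are merged. The article does not prescribe a specific scheme, so the following is a simplified sketch under that assumption:

```python
def federated_average(weight_sets, contributions=None):
    """Merge model weights submitted by independent parties (FedAvg-style).

    weight_sets: list of weight vectors (lists of floats), one per party.
    contributions: optional per-party sample counts used as merge weights.
    """
    n_parties = len(weight_sets)
    if contributions is None:
        contributions = [1] * n_parties
    total = sum(contributions)
    merged = [0.0] * len(weight_sets[0])
    for weights, share in zip(weight_sets, contributions):
        for i, w in enumerate(weights):
            merged[i] += w * share / total
    return merged

# Three parties with different local models; the merge dilutes any outlier.
print(federated_average([[0.9, 0.1], [0.5, 0.5], [0.1, 0.9]]))
# approximately [0.5, 0.5]
```

Weighting contributions by sample count is the usual choice; a single skewed participant then moves the merged model only in proportion to its share.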
Maintaining objective neutrality requires careful attention to the various stages of AI development, including data collection, model training and evaluation. Additionally, ongoing monitoring and updating of AI systems are crucial to addressing potential prejudices that may arise over time.
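Such monitoring can be as simple as recomputing a fairness metric on fresh production data at regular intervals. The sketch below uses the demographic parity difference, one common metric among many (the choice is illustrative, not something the article mandates):

```python
def demographic_parity_gap(predictions, groups, positive=1):
    """Difference in positive-outcome rates between groups.

    predictions: list of model outputs (e.g., 0/1 loan approvals).
    groups: list of group labels, aligned with predictions.
    A gap near 0 suggests parity on this metric; a large gap is a
    signal to investigate, not proof of discrimination by itself.
    """
    rates = {}
    for pred, group in zip(predictions, groups):
        hits, total = rates.get(group, (0, 0))
        rates[group] = (hits + (pred == positive), total + 1)
    shares = {g: hits / total for g, (hits, total) in rates.items()}
    return max(shares.values()) - min(shares.values())

# Run periodically on fresh production data to catch drift over time.
preds = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(preds, groups))  # 0.5 -> worth investigating
```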
To gain a deeper understanding of whether blockchain tech can make AI systems completely neutral, Cointelegraph reached out to Ben Goertzel, founder and CEO of SingularityNET — a project combining artificial intelligence and blockchain.
In his view, the concept of “complete objectivity” is not really helpful in the context of finite intelligence systems analyzing finite data sets.
“What blockchain and Web3 systems can offer is not complete objectivity or lack of bias but rather transparency so that users can clearly see what bias an AI system has. It also offers open configurability so that a user community can tweak an AI model to have the sort of bias it prefers and transparently see what sort of bias it is reflecting,” he said.
He further stated that in the field of AI research, “bias” is not a dirty word; it simply indicates the orientation of an AI system looking for certain patterns in data. That said, Goertzel conceded that opaque biases, imposed by centralized organizations on users who are unaware of them yet guided and influenced by them, are something people need to be wary of. He said:
“Most popular AI algorithms, such as ChatGPT, are poor in terms of transparency and disclosure of their own biases. So, part of what’s needed to properly handle the AI-bias issue is decentralized participatory networks and open models, not just open-source but open-weight matrices that are trained, adapted models with open content.”
Similarly, Dan Peterson, chief operating officer for Tenet — an AI-focused blockchain network — told Cointelegraph that it’s tough to quantify neutrality and that some AI metrics cannot be unbiased because there is no quantifiable line for when a data set loses neutrality. In his view, it eventually boils down to the perspective of where the engineer draws the line, and that line can vary from person to person.
“The concept of anything being truly ‘unbiased’ has historically been a difficult challenge to overcome. Although absolute truth in any data set being fed into generative AI systems may be hard to pin down, what we can do is leverage the tools made more readily available to us through the use of blockchain and Web3 technology,” he said.
Peterson stated that techniques built around distributed systems, verifiability and even social proofing can help us devise AI systems that come “as close to” absolute truth as possible. “However, it is not yet a turnkey solution; these developing technologies help us move the needle forward at breakneck speed as we continue to build out the systems of tomorrow,” he said.
Looking toward an AI-driven future
Scalability remains a significant concern for blockchain technology. As the number of users and transactions increases, blockchain solutions may struggle to handle the massive amounts of data generated and processed by AI systems. Moreover, the adoption and integration of blockchain-based solutions into existing AI systems pose significant challenges of their own.
First, there is a lack of understanding and expertise in both AI and blockchain technologies, which may hinder the development and deployment of solutions that combine both paradigms effectively. Second, convincing stakeholders of the benefits of blockchain platforms, particularly when it comes to ensuring unbiased AI data transmission, may be challenging, at least in the beginning.
Despite these challenges, blockchain tech holds immense potential for leveling the playing field in the rapidly evolving AI landscape. By leveraging key features of blockchain, such as decentralization, transparency and immutability, it is possible to reduce biases in data collection, management and labeling, ultimately leading to more equitable AI systems. Therefore, it will be interesting to see how the future pans out from here.