Edtech News Round Up: The Dangers Of AI Bias
Is bias inevitable in AI if it’s already embedded in those who programmed it?
Machine learning applications are spreading rapidly across all industries. We are now seeing more and more edtech companies incorporating the benefits of AI into their learning tools.
As with most new technology implementations, this raises serious concerns and questions about the effects machine learning will have on students, administrators, and educators.
One of the main challenges machine learning faces is data. Machine learning cannot work without data, but if that data is biased it can produce misleading outcomes.
The datasets associated with education and human development are complex. The likelihood of finding a spurious correlation that means nothing is higher than the likelihood of finding evidence of a reliable relationship.
If the data used to make a decision is flawed, the decision will be flawed too. Data is also biased when it is not representative of the population it is meant to serve.
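To make the representativeness point concrete, here is a minimal, hypothetical sketch (not taken from the article): a trivial "at-risk" rule whose threshold is chosen from a sample dominated by one group of students ends up flagging the under-represented group far more often, purely as an artifact of the skewed data.

```python
import random

random.seed(0)

# Hypothetical example: two student groups whose test scores follow
# different distributions. The numbers are illustrative only.
group_a = [random.gauss(70, 5) for _ in range(1000)]  # well represented
group_b = [random.gauss(55, 5) for _ in range(1000)]  # under-represented

# A non-representative "training" sample: 95% group A, 5% group B.
sample = group_a[:950] + group_b[:50]

# The "model": flag any student scoring below the sample mean as at-risk.
threshold = sum(sample) / len(sample)

# Fraction of each group that gets flagged by this rule.
flagged_a = sum(score < threshold for score in group_a) / len(group_a)
flagged_b = sum(score < threshold for score in group_b) / len(group_b)

# Because the threshold mostly reflects group A's distribution,
# nearly all of group B is flagged, regardless of individual merit.
print(f"flagged in group A: {flagged_a:.0%}")
print(f"flagged in group B: {flagged_b:.0%}")
```

The bias here comes entirely from the sampling, not from any property of the students, which is exactly why reviewing what data a tool was built on matters.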
As biased data is now a real concern, how can edtech companies eliminate bias from their tools?
Companies that want to build machine learning into their products need a reliable source of truth as a guide. Humans are therefore needed to review the data and check whether edtech tools are serving the purpose they were created to fulfill.
In hopes of better controlling bias in edtech tools, educators and administrators are starting to take on the role of decision-makers in choosing which edtech tools to implement in their schools and classrooms.
The responsibility for implementing non-biased AI edtech tools falls not only on the companies creating the tools but on teachers and administrators as well. Educators need to ask questions concerning the data, how it’s used to generate models, and how edtech companies can be sure that biases do not exist.
The concern over AI bias is widespread enough that TechCrunch has also run a piece on it called "In the Public Sector, Algorithms Need a Conscience."
Algorithm biases can have more serious implications outside the classroom. In the new types of AI that are being used for government purposes, a mistake or incorrect decision based on an algorithm could lead to the arrest of an innocent person.
This can be seen in the use of AI-powered facial recognition. If the data is incorrect or biased it results in a misinformed decision, one that could have detrimental effects on an innocent person’s life. In the case of government uses, we should not be putting potentially biased algorithmic systems, which have no conscience, in the position of informing authority.
Further questions need to be raised about responsibility in AI, not only in education but in the wider world as well. It has been suggested that in these real-world applications of algorithms, responsibility ultimately rests with the people feeding the algorithms their information.
The first step may be putting codes or pledges in place for the people training algorithms to adhere to, beyond what is required by their respective companies. We need such a commitment in order to protect people from the cold, hard oppressive potential that these systems have.
The UN Human Rights Council passed a resolution concerning the promotion, protection, and enjoyment of human rights on the internet. This resolution would condemn any country that disrupts internet access to its citizens intentionally.
The resolution stems from the position that people's rights should be protected not just in real life but also when they operate in the digital world. This is seen as especially important for the right to freedom of expression.
Resolutions like this, while not legally binding, put pressure on governments to adhere to human rights standards.
The resolution comes in response to concerns about an increasing number of countries using the internet as a method of controlling their citizens. Therefore, there is a need for strong human rights standards.
It is the responsibility of each country to try to address priority issues. These include government efforts to undermine anonymity and encryption, as well as efforts to pressure private information and communication technology companies into participating in censorship.