humane tech

Can you design algorithms without inherent bias?

Especially during these more isolated times, many of us turn to social media to keep up with friends & family, learn about relevant news, and watch endless cat videos. This is probably the most visible way many people interact with artificial intelligence on a daily basis, whether they are aware of it or not. AI decides what content appears in your feed, in what order, and whether you see it at all. But AI is designed by humans, and many argue that we are not doing enough to remove human bias before it gets encoded in the technology we build.
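To make that concrete, here is a toy sketch in Python of how an engagement-based feed ranker might work. This is emphatically not any real platform's algorithm; every post, field name, and weight below is made up for illustration. The point is simply that the scoring rule is a human choice, and human choices can encode bias about what "deserves" attention.

```python
# A toy feed ranker (illustrative only -- not any real platform's algorithm).

posts = [
    {"title": "Cat video",        "likes": 900, "comments": 40},
    {"title": "Local news story", "likes": 120, "comments": 85},
    {"title": "Friend's update",  "likes": 15,  "comments": 3},
]

def engagement_score(post):
    # These weights are arbitrary human choices. Deciding that a comment
    # is "worth" five likes is exactly the kind of design decision that
    # quietly shapes what millions of people see.
    return post["likes"] + 5 * post["comments"]

# Sort the feed so the highest-scoring posts appear first.
feed = sorted(posts, key=engagement_score, reverse=True)
for post in feed:
    print(post["title"], "->", engagement_score(post))
```

Change the weights and the feed reorders itself; that is all "the algorithm deciding what you see" really means at its core.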


#techtopic

IBM made a bold statement earlier this week, announcing that it will no longer offer general-purpose facial recognition software. In a letter calling for broader police reforms, IBM CEO Arvind Krishna wrote that “IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling” or violations of human rights.

Amazon soon followed IBM’s lead, implementing a one-year pause on police access to its facial recognition software. It also called on Congress to enact legislation governing how the technology may be used. And employees at Microsoft are pushing for similar changes.

Tech continues to struggle with many challenges related to bias in how products are designed and used, including:

  • when do you decide not to build something? Does that decision hurt your ability to stay competitive? Does it influence whether other tech firms follow suit?

  • how do you ensure that the tech you are building does not inherently further bias through its design & architecture? (One minimal example of a bias check follows this list.)

  • how do you ensure that the tech isn’t used in a manner inconsistent with your company’s values?
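On the second question, one common (though far from sufficient) technique is to audit a model's decisions across demographic groups. Here is a minimal Python sketch of one such check, often called demographic parity; the group names and decision data are entirely hypothetical, and real audits use much larger samples and multiple fairness metrics.

```python
# A minimal sketch of one bias audit: comparing approval rates across groups.
# All data here is invented for illustration.

def selection_rate(decisions):
    """Fraction of people who received a positive decision (e.g., a loan approval)."""
    return sum(decisions) / len(decisions)

# Hypothetical model outputs, grouped by demographic: 1 = approved, 0 = denied.
outcomes_by_group = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],
}

rates = {group: selection_rate(d) for group, d in outcomes_by_group.items()}
gap = max(rates.values()) - min(rates.values())

for group, rate in rates.items():
    print(f"{group}: approved {rate:.0%} of the time")
print(f"demographic parity gap: {gap:.0%}")
# A large gap is a red flag, not proof of bias -- and a small gap is not
# proof of fairness. Competing fairness metrics often disagree, which is
# part of why "removing bias" is so hard.
```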

AI has had a great impact on our daily lives: reducing commute times, enabling mobile check deposits, improving fraud detection and spam filters, and optimizing energy operations & the power grid.

Yet these successes have been tempered by countless stories of AI models exhibiting racist behavior. One model directed Black Americans away from higher-quality healthcare, while another labeled a thermometer in the hands of a Black person as a gun. And Google was famously rebuked for labeling Black people as gorillas in its photo-categorization software (and then for not fixing it with a real solution).

