AI and the rise of fascism

Friday 11 August 2017

Kate Crawford was by far the most illuminating speaker on AI at SXSW. She is a Principal Researcher at Microsoft, Visiting Professor at MIT, Senior Fellow at NYU and member of the WEF’s Global Agenda Council on Data-Driven Development. She may not have written the book on artificial intelligence, but she sure has published myriad fascinating academic papers on the topic.

Back in the day…

Throughout history, governments have tried to catalog people. Fascist regimes would have quite literally killed for the algorithms tech companies are using today. Their own technology left a lot to be desired, forcing them to rely on physical checks, punch cards, lie detectors… not quite scalable.  

They used physical traits and markers to “prove” that certain people were inferior, be it African-Americans or Jews, and used these labels to justify their actions. Fascist regimes needed a scapegoat to divert the attention away from their own dubious policies. Scapegoats are a great means to channel the public’s anger into one arbitrary vessel. Problem solved.

New tech, same idea

Today, we don’t send out propaganda flyers on “How to recognize a Jew”, but we are using facial recognition technology to scan for possible suspects. A gentleman on his way to SXSW was stopped at the border. When he asked why he was being targeted, border control responded with a telling: “I don’t know, Sir, it’s the algorithm. I guess you just have one of those faces.”

It’s scary to think that government officials are using technology without truly knowing how it works. Are they actually using the technology? Or is technology using them? And it’s not just limited to governments these days: plenty of tech companies have similar algorithms. There’s no transparency regarding their use, and these algorithms are so complex that people are constantly being manipulated by them, without seeming to care. 

Al Gorithm for president

An example: have you ever taken one of those silly online quizzes that tells you which Disney character you’re most like? Or what your spirit animal is? Or your twin celebrity? Great, you’ve been profiled. Cambridge Analytica used that seemingly worthless data (some 5,000 data points each, from more than 220 million Americans) to create elaborate profiles that will tell you just about anything. How likely you were to vote for Trump, for example. Presidential candidates no longer win elections. Big Data does.

That’s where we hit a bit of an ethical roadblock. How do you use technology that can be used to target certain people, and only those people? There’s no such thing as unbiased data analysis. Whoever designs an algorithm subconsciously influences it.

Systemic discrimination and clever manipulation

Google Jobs shows fewer management positions with $200,000 paychecks to women than it does to men. So how can you apply for a job you don’t even know exists? These practices are directly contributing to gender inequality, without any active human interference.

Another case of clever manipulation: fake news. We’re being drowned in a barrage of fake news articles. It’s confusing, and finding out the truth becomes increasingly harder. It’s the perfect breeding ground for fascist regimes. It’s a tactic that was used by the Nazis, but perfected by today’s extremists. Make the truth hard enough to find, and people will stop caring.

In need of white knights

All new technologies can be used for good and evil. That’s why Start it @kbc supports start-ups who want to unlock the potential of AI. Artificial Intelligence can be a great ally for managers, analysts and other workers alike. When used correctly, AI will help us thrive. The past is a great warning sign, so let’s be prepared this time around.

We need people and start-ups who aren’t against the technology, but who understand how it works, document it, and research both its potential and its pitfalls. That way, we’ll know what data is being gathered and how analyses are executed. Then, these people and companies can set up guidelines on how to use AI to improve the world.

Two Belgian startups that are already in the business of doing great things using AI are Scriptbook and Sympl, both powered by Start it @kbc. Scriptbook uses the technology to find great movie scripts and predict their success, while Sympl uses AI to match great jobs with the perfect applicants. Pretty neat, right?

Interested in learning more about the specifics and the ethical use of AI? Check out Kate Crawford’s AI Now initiative, which gathers people from all over the world to talk about these important topics.