Algorithms, errors and police

 
 
Flee
| Marty Forum Ninja
 
ID: Flee

15,477 posts
 
(embedded YouTube video)

The video is very short, but the tl;dr is that technology isn't neutral and machines can and do learn bad things from us. Among other areas, this is particularly troublesome in law enforcement and criminal justice.

Quote
The rise of big data policing rests in part on the belief that data-based decisions can be more objective, fair, and accurate than traditional policing.

Data is data and thus, the thinking goes, not subject to the same subjective errors as human decision making. But in truth, algorithms encode both error and bias. As David Vladeck, the former director of the Bureau of Consumer Protection at the Federal Trade Commission (who was, thus, in charge of much of the law surrounding big data consumer protection), once warned, "Algorithms may also be imperfect decisional tools. Algorithms themselves are designed by humans, leaving open the possibility that unrecognized human bias may taint the process. And algorithms are no better than the data they process, and we know that much of that data may be unreliable, outdated, or reflect bias."

Algorithmic technologies that aid law enforcement in targeting crime must compete with a host of very human questions. What data goes into the computer model? After all, the inputs determine the outputs. How much data must go into the model? The choice of sample size can alter the outcome. How do you account for cultural differences? Sometimes algorithms try to smooth out the anomalies in the data—anomalies that can correspond with minority populations. How do you address the complexity in the data or the "noise" that results from imperfect results?

Sometimes, the machines get it wrong because of racial or gender bias built into the model. For policing, this is a serious concern. [...]

As Frank Pasquale has written in his acclaimed book The Black Box Society, "Algorithms are not immune from the fundamental problem of discrimination, in which negative and baseless assumptions congeal into prejudice. . . . And they must often use data laced with all-too-human prejudice."

Inputs go in and generalizations come out, so that if historical crime data shows that robberies happen at banks more often than at nursery schools, the algorithm will correlate banks with robberies, without any need to understand that banks hold lots of cash and nursery schools do not. "Why" does not matter to the math. The correlation is the key. Of course, algorithms can replicate past biases, so that if an algorithm is built around biased data, analysts will get a biased result. For example, if police primarily arrest people of color from minority neighborhoods for marijuana, even though people of all races and all neighborhoods use marijuana at equal rates, the algorithm will correlate race with marijuana use.

The algorithm will also correlate marijuana with certain locations. A policing strategy based on such an algorithm will correlate race and drugs, even though the correlation does not accurately reflect the actual underlying criminal activity across society. And even if race were completely stripped out of the model, the correlation with communities of color might still remain because of the location. A proxy for racial bias can be baked into the system, even without any formal focus on race as a variable. [...]

As mathematician Jeremy Kun has written, "It’s true that an algorithm itself is quantitative—it boils down to a sequence of arithmetic steps for solving a problem. The danger is that these algorithms, which are trained on data produced by people, may reflect the biases in that data, perpetuating structural racism and negative biases about minority groups."

Big data policing involves a similar danger of perpetuating structural racism and negative biases about minority groups. "How" we target impacts "whom" we target, and underlying existing racial biases mean that data-driven policing may well reflect those biases.

This is a much bigger problem than most people realize. It's only really entered the spotlight over the past two or three years and is only just now becoming mainstream. Figured I'd make a thread about it to bring some life to Serious and because this is what I am currently working on (making AI accountable).
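To make the marijuana example from the quote concrete, here's a rough sketch in Python with completely made-up numbers (no real department's data): two neighborhoods with identical true drug use, but very unequal police attention.

Code:
import random

random.seed(0)

# Toy model (invented numbers): two neighborhoods with the SAME true rate of
# marijuana use, but very different levels of police attention.
true_use_rate = 0.10
police_checks = {"neighborhood_A": 200, "neighborhood_B": 2000}

arrests = {}
for hood, checks in police_checks.items():
    # Each check finds a user with probability equal to the true use rate.
    arrests[hood] = sum(random.random() < true_use_rate for _ in range(checks))

print(arrests)
# Typical output: ~20 arrests in A vs ~200 in B. An algorithm trained only on
# arrest records "learns" that B's residents use marijuana ten times as often,
# even though the underlying behaviour is identical.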


N/A | Mythic Inconceivable!
 
ID: Zenmaster

7,870 posts
 
This user has been blacklisted from posting on the forums. Until the blacklist is lifted, all posts made by this user have been hidden and require a Sep7agon® SecondClass Premium Membership to view.


Saleem | Heroic Unstoppable!
 
Steam: Koalgon
ID: Saleem

2,504 posts
Sigs fo nigs


 
challengerX
| custom title
 
ID: challengerX

41,690 posts
I DONT GIVE A SINGLE -blam!- MOTHER -blam!-ER ITS A MOTHER -blam!-ING FORUM, OH WOW, YOU HAVE THE WORD NINJA BELOW YOUR NAME, HOW MOTHER -blam!-ING COOL, NOT, YOUR ARE NOTHING TO ME BUT A BRAINWASHED PIECE OF SHIT BLOGGER, PEOPLE ONLY LIKE YOU BECAUSE YOU HAVE NINJA BELOW YOUR NAME, SO PLEASE PUNCH YOURAELF IN THE FACE AND STAB YOUR EYE BECAUSE YOU ARE NOTHING BUT A PIECE OF SHIT OF SOCIETY
>minority communities tend to be less affluent, not as well educated, and do not provide as much of a chance of upward social mobility
>minority communities as a result have a higher crime rate
>crime data being taken in acknowledges those areas are committing a disproportionate amount of crime in comparison to other areas
...
>so this means the technology is biased for noticing trends that occur among different racial communities

Really?
You're not white you bitch ass malinchista


 
 
Flee
| Marty Forum Ninja
 
ID: Flee

15,477 posts
 
What you're saying is a common misconception. You're missing a few key things, which I'll try to explain briefly.

1. Statistics are easily manipulated and interpreted in different ways. They're a guideline, but insufficient to dictate policy on their own and prone to misuse. The real problems arise when big data is used not just to provide information but to make decisions affecting individual people. Imagine you're a man looking to become a teacher. The school district employs an algorithm to assess job applications and potential candidates. This system takes into account dozens of characteristics and data points to evaluate your profile. One of the things it learns from the data it's trained with is that men make up the vast majority of sex offenders and are responsible for almost all cases of teachers sexually or physically abusing students. As a result, it ties men to these crimes and incorporates this into its decision-making. Every man, by default, gets a point deduction because he fits a higher-risk profile, and men will systematically be hired less.

This goes for a dozen different things. Say you're applying to a college. Its algorithm determines that people from your state / area / region / background tend to drop out more often than the average. Since every student is an investment, colleges want successful ones. As such, your name is by default put at the bottom of the list despite no person at the school having met you or being able to assess you on your merits alone. The same thing applies here. All of this, as you say, is based on accurate, real and reliable facts that reflect actual trends in our society, yet I think you're going to have to agree that it's far from fair.
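A rough sketch of that first point in Python (toy data, invented incident rates, hypothetical features): a model trained on skewed historical records ends up scoring two otherwise identical applicants differently based on gender alone.

Code:
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Made-up historical records: gender (1 = male), years of experience, and
# whether a misconduct incident was ever recorded. Incidents are rare but,
# as in the statistics mentioned above, concentrated among men.
n = 5000
is_male = rng.integers(0, 2, n)
experience = rng.uniform(0, 20, n)
incident = rng.random(n) < np.where(is_male == 1, 0.02, 0.002)

X = np.column_stack([is_male, experience])
model = LogisticRegression().fit(X, incident)

# Two applicants identical in every respect except gender.
applicants = np.array([[1, 10.0],   # male, 10 years experience
                       [0, 10.0]])  # female, 10 years experience
risk = model.predict_proba(applicants)[:, 1]
print(risk)  # the male applicant gets a noticeably higher "risk" score

# If the hiring pipeline ranks candidates by (qualifications - risk penalty),
# every man starts with a deduction before anyone has looked at him personally.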

2. These algorithms exacerbate existing problems and biases by creating a feedback loop. Say the system identifies an area or a specific group that has a lot of issues with crime. As a result, the police focus their attention there and deploy more cops with a specific mandate. You will then see that even more crime is recorded in this area, simply because there are more cops actively looking for it. There isn't any more crime than there was before; it's just noticed more often. You then put this new data in the system and voila - feedback loop. "The computer was correct, we listened to it and caught more criminals who are black". This can lead to adverse effects and the over- or under-policing of certain areas.

The attributes you've found serve as a proxy for race, and rather than fairly policing anything, you're now effectively policing people based on the color of their skin. Take policies like stop and frisk or random traffic stops. There's been a lot of research finding substantial racial bias in how these were executed. If you now use that data to train a computer to determine who should be "randomly" stopped, you'll find that it also focuses more on blacks. Aside from the problem of how this affects innocent individuals (see 1.), simply by focusing more on blacks, you'll now find more criminals among them. That's basic logic. Feed this back into the system and you'll end up with a situation where whites are given a pass or stopped less and less based on the assumption that they're less likely to be criminals, but this assumption is already based on previous data (analysis) and can therefore exacerbate the issues and bias. This can lead to the underlying problem being ignored and existing problems being continued rather than fixed.
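Here's a minimal sketch of that feedback loop with made-up numbers: two districts with identical true crime, patrols sent wherever the database says the "hotspot" is, and a database that only contains what the patrols happen to see.

Code:
# Toy feedback loop (all numbers invented): two districts with identical true
# crime. The department sends most patrols wherever last year's *recorded*
# crime was highest, and recorded crime is simply what the patrols observe.
true_crime = {"A": 100, "B": 100}      # actual incidents per year, identical
recorded = {"A": 55, "B": 45}          # a small initial skew in the database

for year in range(1, 6):
    hotspot = max(recorded, key=recorded.get)
    share = {d: (0.8 if d == hotspot else 0.2) for d in recorded}  # patrol split
    # You only record the crime you are present to see.
    recorded = {d: round(true_crime[d] * share[d]) for d in recorded}
    print(year, "patrols:", share, "recorded:", recorded)

# After one iteration the database says A has four times B's crime (80 vs 20),
# purely because that's where the patrols were. Retrain on this data and the
# allocation looks "confirmed" -- the loop never finds out the districts are equal.

The allocation rule here is deliberately simple, but the point survives fancier versions: as long as what gets recorded depends on where you look, the data can't correct the assumption that sent you there.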

3. You now also institutionalize the problem. It's easy to check a person and have them justify certain actions in order to determine if they're prejudiced or wrong, but it's a lot harder with a very intelligent computer. People take what technology says for granted and trust that it's neutral, fair and accurate, while it very often isn't. The more we move towards machine learning, the more we run the risk of incorporating these issues that are potentially extremely difficult to detect. A famous example is that of image recognition software distinguishing between different animals. An AI was trained to do this and it became extremely good at it with very little effort. So good that the people who created it became skeptical. Want to take a guess what they found when they really put it to the test? I'll comment later.
Last Edit: January 11, 2018, 11:34:19 AM by Flee


N/A | Mythic Inconceivable!
 
ID: Zenmaster

7,870 posts
 
This user has been blacklisted from posting on the forums. Until the blacklist is lifted, all posts made by this user have been hidden and require a Sep7agon® SecondClass Premium Membership to view.
Last Edit: January 11, 2018, 06:51:05 PM by Zen


 
 
Flee
| Marty Forum Ninja
 
ID: Flee

15,477 posts
 
Sorry for the late reply by the way, I just got home recently, started writing, and got logged off by accident because I forgot to click the keep me logged on button.
No worries, take your time. I'll respond in parts to keep it digestible and so I don't have to write it all at once.
Got a bit carried away with this section though, so I'll keep the next one shorter.

I agree with everything you say in the first paragraph (how socioeconomic status affects crime rates and why they’re typically higher among minorities), but I think you’re kind of missing my point. No one is really “blaming” the data here, at least not in the way I think you’re interpreting it. When people say that big data is biased or dangerous, they don’t necessarily mean that it’s evil, wrong or intended to discriminate, but they are usually referring to one of two things.

One, that the data itself is flawed because it’s based on inaccurate information (for example, a dataset of the race of possible suspects as identified by eyewitnesses, which is faulty because, as studies have shown, people are much more likely to fill in the gaps in their memory with prejudices, so that white offenders are often mistakenly identified as black by victims) or because it’s collected in a skewed way (a misleading sample – an analysis of how men deal with problems that is based entirely on information about prisoners convicted of violent crimes but presented as representing men as a whole, for example). Two, that the analysis of potentially accurate data, and decisions made on the basis thereof, cause biased, adverse and harmful effects against a certain group of people or individuals. For example, poor people are more likely to commit (certain) crimes. You’ve already explained why (socioeconomic status leads to fewer opportunities and more likely involvement in crime). This is a fact that’s backed up by statistics and hard data. However, if you feed that kind of information to a learning algorithm (or it figures it out on its own), it can cause biased and dangerous results.

- Information: most people involved in a certain type of crime are poor.
- Correlation: poverty is treated as a risk factor for committing crime.
- Outcome: if two suspects are identical in every way with the exception of their income, the poor person should be the one who is arrested and who should receive the harsher punishment, as he is deemed more likely to have committed this crime and potential future crimes.
- Outcome 2: the poor person is now arrested and receives a harsh sentence, which is information that goes back into the system. This reinforces the belief that poor people should be arrested more and receive harsher punishments because the algorithm was proven “right” last time (in the sense that the police and criminal justice system followed its lead – regardless of whether or not the person was actually guilty), meaning that the algorithm is even more likely to target poor people next time.
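A rough numeric sketch of that loop (everything invented, a one-feature toy model): each arrest the system itself recommends is fed back as a "correct" prediction, so the weight on poverty only grows.

Code:
import numpy as np

# Minimal sketch of the bullets above: a logistic risk model with one feature,
# "suspect is poor". Whoever the model flags gets arrested, and the arrest goes
# back in as a confirmed positive example.
w = np.array([0.1])          # small initial weight on the "poor" feature
lr = 0.5                     # learning rate for the online update

def risk(x):                 # x = [is_poor]
    return 1 / (1 + np.exp(-(w @ x)))

for step in range(5):
    poor, rich = np.array([1.0]), np.array([0.0])
    flagged = poor if risk(poor) >= risk(rich) else rich   # identical otherwise
    # The arrest is recorded as a "correct" prediction (label y = 1) and the
    # model takes a gradient step on it -- nobody checks actual guilt.
    y = 1.0
    w += lr * (y - risk(flagged)) * flagged
    print(step, "weight on 'poor':", round(float(w[0]), 2))

# The weight only ever grows: each arrest the model itself caused is treated as
# fresh evidence that poverty predicts crime.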

All of this is based on solid data. It’s a valid and legitimate representation of an aspect of our society. Yet still, it can clearly cause disparate outcomes. If you let a big data analytical system tasked with finding patterns and learning about profiles into the decision-making process, you can definitely cause problems, even though the decisions seem to be based on solid information.

Poor people are more involved in crime > more police focus on poor areas, less on richer areas > more poor people arrested and convicted > even more evidence that the poor have criminal propensity > even more focus on the poor (reinforcing the “prison pipeline” and putting more of them in prison, which we know teaches criminal habits and makes them less likely to be employed afterwards so they remain poor) and even harsher sentences and judgments > system grows increasingly more “biased” against poor people because of the feedback loop (it’s being proven “right” because more poor people are going to jail because of it) > the current problems of inequality persist, the underlying problem goes unaddressed, minority communities are further ostracized, the rich/privileged are given more “passes” while the poor/disadvantaged are given less leeway and more punishments > social mobility is stifled and the divide between the rich and poor grows because institutionalized computer systems serve as an added obstacle…

And all of this happens on the basis of cold, hard and factual data paired with a very smart computer. This is just one of the dozens of possible scenarios, but I hope that this clarifies what I meant. Data is not necessarily wrong or inherently bad, even when it’s “biased”. The point is that technology can pick up on these inequalities / problems / different treatments and actually reinforce them further because it considers them the norm. The risk is that algorithms learn from data, create generalized (and potentially “prejudiced” or “biased”) profiles, and then apply them to individuals (“your father was abusive which means that you’re more likely to be abusive too so you’ll be denied a job since you’re considered a potential abuser regardless of the person you are”) who suffer as a consequence but have almost no way to fight back because their disparate treatment is (often wrongfully) legitimized as “well the computer says so and it’s an intelligent piece of technology so it’s neutral and always objective”.

That said though, I'm not at all against these new technologies. I think they carry great promise and can be very useful. I even work on them to provide legal advice and aid their development in legally and ethically compliant ways. I just wanted to point out some of the dangers and hear what people thought of it.


MyNameIsCharlie | Mythic Inconceivable!
 
ID: MyNameIsCharlie

7,795 posts
Get of my lawn
So Garbage In, Garbage Out.

Police tend to work high-crime areas, and one would think only those areas have drug issues. Unless everywhere is polled, you can't draw any usable data out of it. You only see where the police are.


 
 
Flee
| Marty Forum Ninja
 
ID: Flee

15,477 posts
 
Regardless, even with the systematic bias towards male teachers, I can say from my own personal anecdotes that this metric doesn't have as great an effect as you think.
I know it's not like that now, but my post relates to what can very well happen the more we rely on algorithms to make these decisions based on big data analytics. They learn patterns and apply them back in a way that ultimately reinforces them. Even if you explicitly tell a system not to look at whether someone is male or female, there are plenty of proxies it can use to indirectly tell someone's gender. You say it's fair to consider these things (which it very well might be), but the stronger the pattern the larger the "weight" assigned to the value. It's easy to say that it's fine for you right now and that you don't mind them considering this, but imagine a computer deciding a woman with lower qualifications and less experience than you should get the job because she is a woman (which the majority of teachers are, so it considers this "good" and the norm) and because you're a man (which the data ties to higher rates of sexual abuse, physical violence and, overall, a higher rate of getting fired). Would that still be fair?
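To illustrate the proxy problem, here's a toy sketch (invented features and rates): the gender column is left out of the model entirely, yet it can still be recovered from a couple of innocent-looking fields.

Code:
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Made-up applicant records: gender plus two "innocent" fields that happen to
# correlate with it (say, subject taught and whether they coach a sport).
n = 4000
gender = rng.integers(0, 2, n)                                  # 1 = male
subject_pe = rng.random(n) < np.where(gender == 1, 0.6, 0.1)    # teaches PE
coaches = rng.random(n) < np.where(gender == 1, 0.5, 0.15)      # coaches a team

# Gender itself is "excluded" from the model -- only the proxies go in.
X = np.column_stack([subject_pe, coaches]).astype(float)
X_tr, X_te, y_tr, y_te = train_test_split(X, gender, random_state=0)

clf = LogisticRegression().fit(X_tr, y_tr)
print("gender recovered from proxies:", round(clf.score(X_te, y_te), 2))
# Roughly 0.75+ accuracy from two throwaway fields alone. With dozens of
# features, the model can effectively reconstruct the attribute you told it
# to ignore.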

And this can and does go pretty far. You're applying for jobs or colleges. Your profile is checked and scored based on how well you would do. Aside from your own qualifications (degree, experience, traits), you're also scored against a general profile made of you based on similarities and the information they have on you. Your name sounds foreign or Spanish? Shame, but -10 points on language skills because statistically those people are less fluent in English than "Richard Smith" is. You're from area X? Ouch, well that place has some of the highest substance abuse rates in the country so you'll get a -10 on reliability because, statistically speaking, you're more likely to be a drunk or drug addict. You went to this high school? Oof, people from that school tend to have lower graduation rates than the national average so that's a -10 on adequacy. Your parents don't have college degrees? Sucks to be you but it's a fact that children of college educated parents are more likely to score well on university level tests, so -10 on efficiency. That's -40 points on your application based entirely on hard, solid and valid statistics or facts. Perfectly reasonable, no?

Only, these aren't facts about you. They're facts about people like you, taken on average. And of course, this will hold true for many like you. They won't do as well, they will fail more and they might in the end drop out. But for many, this doesn't ring true. They aren't drunks, they are motivated, they would get good grades and they do speak English well. But in the end, they don't even get the chance to try because the system rejects them. This likely condemns them to worse jobs, a lower education and ultimately an almost guaranteed lower social status, all while people from a "good" area with rich, white parents get more opportunities, so that the inequality and the social divide grow while social mobility drops.
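Spelled out as code, that scoring logic looks something like this (the penalties and labels are invented, obviously):

Code:
# A literal version of the scoring described above, with invented penalties.
# Every deduction is "statistically justified" at the group level, yet none of
# them is a fact about the individual applicant.
group_penalties = {
    "foreign_sounding_name": 10,      # group average: lower reported fluency
    "high_substance_abuse_area": 10,
    "low_graduation_high_school": 10,
    "parents_without_degrees": 10,
}

def application_score(base_score, applicant_flags):
    deductions = sum(group_penalties[f] for f in applicant_flags)
    return base_score - deductions

# Two applicants with identical personal qualifications (base score 80).
print(application_score(80, []))                     # 80
print(application_score(80, list(group_penalties)))  # 40
# The second applicant loses 40 points before anyone has read their file.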

Obviously, this is an exaggeration. It doesn't happen now, but it very well could in the not so distant future. As machine learning and AI become more commonplace and powerful, and the amounts of different data they are fed with continues to grow, it becomes increasingly difficult to ascertain exactly what goes on in their "brain". And as these systems are almost always proprietary and owned by companies, there's almost no real way to look into them and find out how they work - especially not if you're just an ordinary person.

And in the area of policing and criminal justice, this is just as big of a problem. It's actually one that we're already seeing today. In several US states, there's a program being used called COMPAS. It's an analytical system that evaluates criminals to assess their risk and likelihood of recidivism. Based on information about them and a survey they're administered, it generates a score showing how likely this person is to commit crimes again. This score is used for several things, from judges determining the sentence or the bail amount to parole officers deciding on release. Sounds good, right? Well, there are two big problems. One, no one knows how it really works because the code is proprietary. Two, studies have found that it's racially biased. The general conclusion was that black defendants were often predicted to be at a higher risk of recidivism than they actually were, while whites were predicted to be lower risk than they actually were. Even when controlling for prior crimes, future recidivism, age, and gender, black defendants were 77 percent more likely to be assigned higher risk scores than white defendants. Is this because the system was built by a racist or was intentionally programmed to be racist? No, it's because of hard and factual data being used to treat people based on a profile.
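For what it's worth, the kind of check those studies ran boils down to something like this (hypothetical column names, and a tiny invented table purely to show the computation):

Code:
import pandas as pd

# Sketch of a disparity check on risk-score output. The data below is made up;
# the point is the breakdown, not the numbers.
df = pd.DataFrame({
    "race":       ["black", "black", "black", "black", "white", "white", "white", "white"],
    "high_risk":  [1, 1, 1, 0, 1, 0, 0, 0],   # flagged as high risk by the tool
    "reoffended": [1, 0, 0, 0, 1, 0, 0, 1],   # what actually happened later
})

# False positive rate: labelled high risk, but did not actually reoffend.
fpr = (
    df[df["reoffended"] == 0]
    .groupby("race")["high_risk"]
    .mean()
)
print(fpr)
# On the real data, this kind of breakdown showed black defendants wrongly
# flagged as high risk far more often than white defendants.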

I'll probably finish the rest tomorrow in a shorter version.


N/A | Mythic Inconceivable!
 
ID: Zenmaster

7,870 posts
 
This user has been blacklisted from posting on the forums. Until the blacklist is lifted, all posts made by this user have been hidden and require a Sep7agon® SecondClass Premium Membership to view.
Last Edit: January 13, 2018, 10:24:25 PM by Zen


 
 
Flee
| Marty Forum Ninja
 
ID: Flee

15,477 posts
 
I understand what you are saying about the feedback loops. However, to address that concern, this information should not be treated as part of the same data set, but rather as a subset of that data set. As I discussed in my previous post, when an area is given a more intensive treatment for the purpose of remedying the difference between that area and the norm, the data should be used for analysis of how the area is improving over time. Think about it like a science experiment: the area is receiving a new variable in its equation (the increased police presence). Treating the area like the other areas is what supports that feedback loop.
I feel like we're talking about two different things here as I don't see how your solution would address the problem. You seem to be talking about criminology studies where I agree that these datasets should be analyzed separately (which already happens). But that's only relevant for the perception of the area and evaluating the impact of new strategies over time. It doesn't really address the practical problems I've raised without crippling the system.

Say you start using an analytical / predictive / big data / AI system for police and criminal justice authorities on January 1st 2018. It bases itself on all the data it has available to it from, say, 2000 - 2017. What would your solution entail over the following months? A police arrest on January 2nd goes into a separate dataset and is effectively kept out of the system? That would kind of defeat the purpose of having the system in the first place. Even at a policy level, I'm not sure how this would work.

I agree that what you're saying is important for criminological research and for assessing the effectiveness of new strategies, but I don't really see how keeping a separate dataset would stop a feedback loop from occurring and bad conclusions from being fed back into the system.


 
 
Flee
| Marty Forum Ninja
 
ID: Flee

15,477 posts
 
My guess is that the computer created generalizations that simplified the process of identifying animals in the quickest way possible. If I am correct, such a thing reminds me of the discussion of how AIs talking to one another "created" a new language by simplifying the syntax of English.
That is indeed what AIs and machine learning systems are intended to do, but the reason I brought up this example is to show how this is prone to errors and misuse. There are dangers in using generalized profiles to make decisions about individual cases.

Basically, the example is that, based on machine learning, this system was supposed to be able to differentiate between dogs and wolves. Dogs, especially some breeds like huskies and Malamutes, can look a whole lot like wolves, making this a difficult task. As is always the case with these AIs, the system was trained by providing it with a lot of example pictures correctly marked "dog" or "wolf". By looking at the pictures and comparing them, the AI learns to pick up on the characteristics and subtle patterns that make dogs different from wolves. After it's done learning, the AI is then provided with new images so that it can apply the patterns it's learned and differentiate a dog from a wolf. As I already said, this worked well. It even worked so well that it raised suspicion. So what happened?

As you correctly guessed, the system created generalizations of the pictures to make profiles of what a wolf looks like and what a dog looks like. Then, it matches new images to these profiles to see which one is the closest match. This was fully intentional, because the idea was that the system would look at things like color, facial features, body structure and so forth to determine which creature is which. However, and this is the kicker, the AI didn't look at the actual animals. Instead, it learned to look at their surroundings. The computer very quickly figured out that if the animal is surrounded by snow or forest, it's almost always a wolf. If it's surrounded by grass or human structures, it's almost always a dog. So rather than actually comparing the animals to one another, the AI basically whited out the shape of the dog or wolf and only focused on its surroundings to go "background = grass / stone so it's a dog" or "background = snow / forest so it's a wolf".
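You can expose that kind of shortcut with a crude occlusion test. This isn't what the researchers actually used (they used a fancier explanation technique), but it shows the idea on a toy stand-in "classifier" that only looks at overall snowiness (everything below is made up):

Code:
import numpy as np

# Toy stand-in for the trained model: it only looks at overall brightness --
# effectively "how much snow is in the picture" -- never at the animal itself.
def predict_wolf_prob(img):
    return img.mean()

# Fake 10x10 grayscale image: bright snowy background, dark animal in the middle.
img = np.full((10, 10), 0.9)
img[3:7, 3:7] = 0.2

base = predict_wolf_prob(img)
sensitivity = np.zeros_like(img)
for i in range(0, 10, 2):
    for j in range(0, 10, 2):
        occluded = img.copy()
        occluded[i:i+2, j:j+2] = 0.5          # grey out one patch
        sensitivity[i:i+2, j:j+2] = abs(predict_wolf_prob(occluded) - base)

print(sensitivity.round(3))
# If the model actually recognised wolves, blanking the animal would move the
# prediction the most. Here the background patches matter at least as much --
# the tell-tale sign of a classifier that has learned "snow", not "wolf".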

This is a picture of a study that replicated this. Notice what the AI did? It blanks out the animal, looks at the surroundings, notices the snow and concludes it's a wolf. And in the vast majority of cases, it's entirely correct because it's a statistically proven fact that wolves are more likely to be found in dense forested and snowy areas while dogs are found around human structures and grassy areas. But in several cases, this is also completely wrong. Wolves do venture out in grassy areas and sometimes can be found around human structures (take a picture of a wolf in a zoo, for example). Likewise, dogs do end up in the snow and forested areas depending on where they live and where their owner takes them for walks.



I brought this up to further illustrate my example from earlier. There are some serious disparate and negative effects that can come from, as you put it, "noticing trends" and applying them for decision-making without proper safeguards, oversight and mitigation techniques in place, even when they are based on solid, valid and statistically sound facts. And these things can and really do happen, partially because of how difficult it is to assess these systems and pick out flaws. Remember when Google's image recognition software identified black people as gorillas? After several years, we finally found out two days ago what their "solution" is. Instead of fixing the actual algorithm, which is a difficult thing to do even for a company like Google, they just removed gorillas from their labelling software altogether and made it into what's effectively an "advanced image recognition tool - for everything other than gorillas" package.

And these things carry huge risks for everyone and in every field. Insurance, loans, job / college applications, renting or buying a place to live, courts, education, law enforcement... The more big data analytics are used (directly or indirectly) to make decisions about individuals or certain groups on the basis of general profiles and statistics, the larger the risk that we're going to see inequality grow and disparate treatments become even further institutionalized.

In the eyes of the system, you're not just Zen based on your own merits, personality and past. Instead, you're a mix of 50% Zen and 50% "composite information of what the average Zen is expected to be like based on the actions of thousands of other people". If that latter 50% has a good connotation (white female from the suburbs in a good area with affluent college educated parents without a criminal past, for example), then that's great. But if it has a negative connotation (black male from the poor inner city and a single-parent household with poor credit score, for example), then Zen (no matter how kind, smart, motivated and capable) is going to systematically face more obstacles because the machine will be less likely to accept his job/college application, charge him more for insurance, refuse to grant him loans or rent him houses or apartments in certain areas, mark him as a more likely suspect for crime, subject him to more police surveillance and so on. And I think you're going to have to admit that this isn't fair, even when it's based on actual trends and statistics in our society, and that it can have serious consequences (the feedback loop of all these obstacles making him less likely to become successful, which in turn reinforces the idea that people like him are less adequate and should be treated accordingly).
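The "50% you, 50% people like you" idea really is that simple when you write it down (numbers invented):

Code:
# Sketch of blending an individual's own record with a group-level prior.
def blended_score(individual_merit, group_prior, weight_on_group=0.5):
    return (1 - weight_on_group) * individual_merit + weight_on_group * group_prior

# Two equally qualified applicants; only the statistics attached to their
# background differ.
print(blended_score(individual_merit=0.9, group_prior=0.8))   # 0.85
print(blended_score(individual_merit=0.9, group_prior=0.3))   # 0.60
# Same person on paper, very different outcome -- and the second applicant has
# no way to argue with the half of the score that was never about them.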

Just to be clear though, I don't think we're really on opposite sides here. The point of this thread was just to get some Serious discussion going on an interesting topic. I don't oppose AI or these analytical systems being used by law enforcement at all. If I did, I wouldn't be working on cutting edge tech to be used by these people. This is just to raise awareness and bring up some of the potential issues and threats of things like mathwashing.
Last Edit: January 14, 2018, 07:56:53 AM by Flee


 
 
Flee
| Marty Forum Ninja
 
ID: Flee

15,477 posts
 
Despite this though, I doubt that anytime in the future we will solely rely on AIs.
Also, real quick, the concern here is not that we will solely rely on AIs. The issue is that mathwashing (see the link I posted earlier) will affect the way we view and treat the output provided by intelligent computers. Experts aren't worried about an enormous AI controlling everything and directly making decisions without any human involvement. They're worried that people will blindly trust potentially flawed computers. This isn't about "Zen has applied for a job or a loan > AI analyzes the application > gives negative outcome potentially based on flawed data or analytics > AI rejects application and shelves it". It's about "AI analyzes the application > produces negative advice based on potentially flawed data or analytics > human in the loop (HR person, for example) blindly trusts the judgment of the AI and rejects the application".

And there's a lot of reasons to assume that'll be the case, ranging from confidence (this is an enormously intelligent machine, why doubt it?) to accountability (if you reject the computer and something goes wrong, it's all on you and you're going to take the fall for thinking you're smarter than an AI) to complacency (people trust tech despite not knowing how it works). If you assign kids an online test or homework that is graded by the teaching platform, you're not going to manually check every single answer for every single student to see if the automatic grading system got it right. You're going to trust the system unless someone complains that their grade is wrong. If you're driving down a road and your car's dashboard says you're doing 70 (the speed limit), you're going to trust that the computer is correct and not assume you're only doing 60 and can still go 10 miles faster. And when you arrive at an intersection and the traffic light is red, you're going to trust the computer in the light and stop.

When the first mainframe computers became a thing in the 60s and 70s, companies still had human "computers" (actual term used for these people) whose only job was to run the calculations of the machine and see if they were correct. Now, we put a lot of trust in these machines without thinking twice. Obviously, we have good reason to because they're generally secure, accurate and reliable, but that doesn't mean the same holds true for emerging tech like AI. This isn't about a dystopian AI solely deciding everything with humans being out of the loop entirely. It's about AIs spitting out analytics and advice that are effectively rubberstamped by people who don't even know how it works, who rarely are in a position to go against the AI and who blindly trust its accuracy.
Last Edit: January 14, 2018, 06:39:40 PM by Flee


 
 
Flee
| Marty Forum Ninja
 
ID: Flee

15,477 posts
 
Fuck the quadpost / monologue, but I'm going to take some of the stuff I wrote here and reuse it for an article I'm working on. Thanks for the help Zen.


N/A | Mythic Inconceivable!
 
ID: Zenmaster

7,870 posts
 
This user has been blacklisted from posting on the forums. Until the blacklist is lifted, all posts made by this user have been hidden and require a Sep7agon® SecondClass Premium Membership to view.


Dietrich Six | Mythic Inconceivable!
 
ID: DietrichSix

11,506 posts
Excuse me, I'm full of dog poison
Can I get a tldr?


 
 
Flee
| Marty Forum Ninja
 
ID: Flee

15,477 posts
 
Can I get a tldr?
Advanced analytical systems and AI are increasingly being used to support policy being drafted and decisions being made about people, including in the area of law enforcement and criminal justice. While very beneficial in several ways, these new technologies also come with risks. The design of the system itself can be flawed, but equally realistic is that "bias" from big data will find its way into the AI that is supposed to learn from it. AIs are created to detect patterns and apply them back into practice. This not only risks decisions about an individual person being made largely on the basis of a profile of how people like him are expected to act, but also perpetuating current inequalities and problems.

If prejudice or other societal factors lead to cops disproportionately targeting black people in "random" vehicle stops and patdowns, an AI learning from police records and arrest data can easily pick up on the relation between race and police encounters. From this data, it can draw the conclusion that black people are more likely to be criminals than whites, and that blacks should therefore be considered as more likely suspects for unsolved crimes or be subject to even more scrutiny. When presented with two identical people of the exact same background and profile (with the only difference being that one is white and the other is black), the police AI will then pick out the black guy as the likely offender because that's what it learned from (potentially biased and flawed) arrest data in the past.

This is a major issue as it can decrease social mobility, exacerbate inequality and result in blatantly unfair treatment of people. It's made worse because it's done by a super intelligent computer that people are unlikely to doubt (as they believe it's hard maths and completely objective) and that's very difficult to hold accountable or assess for errors and bias (due to how complex, inaccessible and secretive these systems are).


Dietrich Six | Mythic Inconceivable!
 
ID: DietrichSix

11,506 posts
Excuse me, I'm full of dog poison
Can I get a tldr?
Advanced analytical systems and AI are increasingly being used to support policy being drafted and decisions being made about people, including in the area of law enforcement and criminal justice. [...]

Why are we listening to computers?


 
 
Flee
| Marty Forum Ninja
 
ID: Flee

15,477 posts
 
Why are we listening to computers?
We already are. Every time you get into a car you trust computers to tell you how fast you're going and whether it's safe to cross the street when the light's green. Every time you sign into your PC or console and log in to a secure service, you trust that the computer isn't sending your payments to a scammer and your personal information to a hacker. It's just becoming more pervasive.

There are a lot of reasons why this is taking off the way it is. Big data analytics and predictive computing can be used very effectively for a lot of good things. It can detect and predict the spread of infectious diseases before any human could. It can pick up on possible terrorist attacks before they happen. It can pick up on patterns investigators might miss to solve cases and fight crime. It can help businesses and government allocate their resources more effectively and free up precious time and commodities to spend elsewhere. It can automate tasks and improve the economy. It can assist researchers everywhere to map and address the consequences of global warming, pollution and international conflict. It can help delivery companies route their trucks better, medical businesses cure diseases faster and cities cut down on littering and traffic accidents more efficiently. It creates fun, new technologies like autonomous drones, self-driving cars and image recognition that lets computers identify what is in a picture and improve search engines. There are untold reasons why computers can help us make decisions. The problem is that, as with many new things, it's not all safe.


Dietrich Six | Mythic Inconceivable!
 
ID: DietrichSix

11,506 posts
Excuse me, I'm full of dog poison
Why are we listening to computers?
We already are. Every time you get into a car you trust computers to tell you how fast you're going and whether it's safe to cross the street when the light's green. [...]

All of these things use data that has been collected from humans and is therefore imperfect. The real problem is that machines can't feel like humans can, and they will likely go to extremes that humans would recognize as unsafe or irresponsible.

Artificial intelligence will likely be the downfall of mankind and I for one do not welcome our circuited overlords.

Will we never learn flee?


N/A | Mythic Inconceivable!
 
ID: Zenmaster

7,870 posts
 
This user has been blacklisted from posting on the forums. Until the blacklist is lifted, all posts made by this user have been hidden and require a Sep7agon® SecondClass Premium Membership to view.


 
 
Flee
| Marty Forum Ninja
 
ID: Flee

15,477 posts
 
Regarding the first section, I feel pretty 50/50 about the implications here. While we can both agree that this isn't fair, the position of a college doing this is understandable. When it comes to compiling data, outliers shouldn't be taken as the norm of a distribution. If a college found two people of equal qualifications, but one came from a background that had a family of drug abusers, I wouldn't chastise the college for choosing the safer of the two bets. However, like you said, this does create the problem of making social mobility easier for people. Generally, this is why safeguards such as affirmative action have been so commonplace.
I'm glad you're starting to see it this way. Social mobility would be harder* for people, not easier, and this could have major negative effects on equality and opportunity. The reason it isn't like this now is because the college almost never knows. It can only judge people based on the information about them and a limited amount of data on their background / surroundings. AI and big data can change all of that. Now, a college might deny the candidate in the very rare case of them somehow knowing he's from a drug-abusing environment. With an AI relying on untold amounts of data and building these profiles for everyone, the college could routinely deny every candidate from such a background, potentially without even really knowing why. Fairness does matter, especially when the possible fallout is this huge. It might be "understandable" for a company in the South not to hire blacks because people would respond better to a white person serving them, but that doesn't mean this is good or acceptable. Denying equal opportunity to people from an unpopular or undesirable background will not only result in heaps of good and adequate individuals missing out on situations they'd perform really well in, but it can also further institutionalize inequality and grow the gap. Systematically keeping people from undesirable, unstable, poor and disenfranchised backgrounds down will only cause them and their profile to be less desirable, less stable, poorer and even more difficult to escape. And while this might sound like some worst-case sci-fi scenario, I can give you heaps of studies and reports talking about these consequences and even already finding their disparate impact today.

Quote
With the latter quote, I do feel that much of what we are finding is how much in its infancy AI software is with respect to its future potential; a lot of what we will find right now is the hiccups in the process of refining these systems. Especially right now, given that AIs can only work in a series of yes and no answers.
This just adds to my point though. AI is still in its infancy (even though it can definitely do more than just provide yes or no answers), which is exactly why these are important things to consider and regulate now rather than letting it grow up without these issues being addressed.


 
 
Flee
| Marty Forum Ninja
 
ID: Flee

15,477 posts
 
All of these things use data that has been collected from humans and therefore imperfect. The real problem is that machines can't feel like humans can and will likely go to extremes that humans recognize as unsafe or irresponsible.

Artificial intelligence will likely be the downfall of mankind and I for one do not welcome our circuited overlords.

Will we never learn flee?
I agree with the first part but not so much the second. I think AI can be a huge force for good. We just need to be very careful and mindful from this point on. Advanced analytics need to be accountable, transparent and auditable. They need to be able to justify why they arrived at certain outcomes and how they analyzed data. Safeguards, alert mechanisms, supervised and fair learning need to be standard and mandated by law. Independent and technically capable oversight bodies need to have access and sufficient power to scrutinize commercial and governmental dealings. The EU is taking steps towards this with its Resolutions on Big Data and Robotics as well as its new General Data Protection Regulation, but this also needs to catch on in the US (as it has in NYC where the first transparent algorithms bill was recently adopted). We can't shut down these technologies and it's probably not in our best interests to do so either. We shouldn't be overly paranoid and shun them because of unlikely doomsday scenarios, but we should also show some serious restraint and take the proper steps to think this through and mitigate or avoid potentially negative consequences.
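To give an idea of what "auditable" could mean in practice, here's a minimal sketch (invented model, feature names and numbers): every automated score gets logged together with each feature's contribution, so an oversight body can reconstruct why the score came out the way it did.

Code:
import json
import numpy as np

# Sketch of a per-decision audit record for a simple linear scoring model.
# All weights, features and identifiers here are made up for illustration.
feature_names = ["prior_arrests", "age", "employment_years"]
weights = np.array([0.8, -0.02, -0.1])
bias = 0.5

def score_and_log(features, case_id):
    contributions = weights * features              # each feature's share of the score
    score = float(contributions.sum() + bias)
    record = {
        "case_id": case_id,
        "score": round(score, 3),
        "contributions": dict(zip(feature_names, contributions.round(3).tolist())),
    }
    print(json.dumps(record))                       # in practice: an append-only audit log
    return score

score_and_log(np.array([2.0, 30.0, 4.0]), case_id="2018-0117")

This only works so neatly for transparent models; for black-box systems you'd need dedicated explanation and logging tooling, which is exactly why oversight and access matter.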


Genghis Khan | Heroic Posting Rampage
 
ID: Karjala takaisin

1,771 posts
 
It seems there is no backup plan if the EU turns into a Fourth Reich. In the US, people have guns if the government starts oppressing people.


 
 
Flee
| Marty Forum Ninja
 
ID: Flee

15,477 posts
 
It seems there is no back up plan if the EU turns into a fourth Reich. In the US people have guns if the government starts oppressing people.
But how can the EU turn tyrannical when Muslim immigrants are going to tear down the government and turn the entire area into a barren wasteland controlled by Sharia law in the first place?

Last Edit: January 16, 2018, 03:22:45 PM by Flee


Genghis Khan | Heroic Posting Rampage
 
ID: Karjala takaisin

1,771 posts
 
It seems there is no back up plan if the EU turns into a fourth Reich. In the US people have guns if the government starts oppressing people.
But how can the EU turn tyrannical when Muslim immigrants are going to tear down the government and turned the entire country into a barren wasteland controlled by Sharia law in the first place?




 
 
Flee
| Marty Forum Ninja
 
ID: Flee

15,477 posts
 
It seems there is no back up plan if the EU turns into a fourth Reich. In the US people have guns if the government starts oppressing people.
But how can the EU turn tyrannical when Muslim immigrants are going to tear down the government and turned the entire country into a barren wasteland controlled by Sharia law in the first place?





rC | Mythic Inconceivable!
 
ID: RC5908

10,559 posts
ayy lmao
All of these things use data that has been collected from humans and therefore imperfect. [...]
I agree with the first part but not so much the second. I think AI can be a huge force for good. [...]
So basically, FOSS software will save humanity?

/g/ was right all along


Dietrich Six | Mythic Inconceivable!
 
ID: DietrichSix

11,506 posts
Excuse me, I'm full of dog poison
All of these things use data that has been collected from humans and therefore imperfect. [...]
I agree with the first part but not so much the second. I think AI can be a huge force for good. [...]

When you play God you just ensure you meet him sooner.


N/A | Mythic Inconceivable!
 
ID: Zenmaster

7,870 posts
 
This user has been blacklisted from posting on the forums. Until the blacklist is lifted, all posts made by this user have been hidden and require a Sep7agon® SecondClass Premium Membership to view.