The rise of big data policing rests in part on the belief that data-based decisions can be more objective, fair, and accurate than traditional policing. Data is data and thus, the thinking goes, not subject to the same subjective errors as human decision making. But in truth, algorithms encode both error and bias. As David Vladeck, the former director of the Bureau of Consumer Protection at the Federal Trade Commission (who was, thus, in charge of much of the law surrounding big data consumer protection), once warned, "Algorithms may also be imperfect decisional tools. Algorithms themselves are designed by humans, leaving open the possibility that unrecognized human bias may taint the process. And algorithms are no better than the data they process, and we know that much of that data may be unreliable, outdated, or reflect bias."

Algorithmic technologies that aid law enforcement in targeting crime must compete with a host of very human questions. What data goes into the computer model? After all, the inputs determine the outputs. How much data must go into the model? The choice of sample size can alter the outcome. How do you account for cultural differences? Sometimes algorithms try to smooth out the anomalies in the data—anomalies that can correspond with minority populations. How do you address the complexity in the data or the "noise" that results from imperfect results?

Sometimes, the machines get it wrong because of racial or gender bias built into the model. For policing, this is a serious concern. [...] As Frank Pasquale has written in his acclaimed book The Black Box Society, "Algorithms are not immune from the fundamental problem of discrimination, in which negative and baseless assumptions congeal into prejudice. . . . And they must often use data laced with all-too-human prejudice."

Inputs go in and generalizations come out, so that if historical crime data shows that robberies happen at banks more often than at nursery schools, the algorithm will correlate banks with robberies, without any need to understand that banks hold lots of cash and nursery schools do not. "Why" does not matter to the math. The correlation is the key. Of course, algorithms can replicate past biases, so that if an algorithm is built around biased data, analysts will get a biased result. For example, if police primarily arrest people of color from minority neighborhoods for marijuana, even though people of all races and all neighborhoods use marijuana at equal rates, the algorithm will correlate race with marijuana use. The algorithm will also correlate marijuana with certain locations. A policing strategy based on such an algorithm will correlate race and drugs, even though the correlation does not accurately reflect the actual underlying criminal activity across society. And even if race were completely stripped out of the model, the correlation with communities of color might still remain because of the location. A proxy for racial bias can be baked into the system, even without any formal focus on race as a variable. [...]

As mathematician Jeremy Kun has written, "It's true that an algorithm itself is quantitative—it boils down to a sequence of arithmetic steps for solving a problem. The danger is that these algorithms, which are trained on data produced by people, may reflect the biases in that data, perpetuating structural racism and negative biases about minority groups." Big data policing involves a similar danger of perpetuating structural racism and negative biases about minority groups.
"How" we target impacts "whom" we target, and underlying existing racial biases means that data-driven policing may well reflect those biases.
>minority communities tend to be less affluent, are not as well educated, and do not offer as much of a chance at upward social mobility
>minority communities as a result have a higher crime rate
>the crime data being collected reflects that those areas account for a disproportionate amount of crime compared to other areas...
>so this means the technology is biased for noticing trends that occur among different racial communities

Really?
Sorry for the late reply, by the way. I just got home recently, started writing, and got logged off by accident because I forgot to click the "keep me logged in" button.
Regardless, even with the systematic bias towards male teachers, I can say from my own personal anecdotes that that metric does not have as great an effect as you think.
I understand what you are saying about the feedback loops. However, to address that concern, this information should not be treated as the same data set, but rather as a subset of that data set. As I discussed in my previous post, when an area is given a more intensive treatment for the purpose of remedying the difference between that area and the norm, the data should be used to analyze how the area is improving over time. Think about it like a science experiment: the area is receiving a new variable in its equation (the increased police presence). Treating the area like the other areas is what sustains that feedback loop.
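For what it's worth, here is a rough sketch (Python; the field names and numbers are made up) of the kind of separation I mean: flag the treated area's records and normalize by patrol hours so the extra attention doesn't inflate its risk score.

Code:
records = [
    {"area": "A", "arrests": 120, "patrol_hours": 400, "treated": True},
    {"area": "B", "arrests": 30,  "patrol_hours": 100, "treated": False},
]

def adjusted_rate(rec):
    # Arrests per patrol hour instead of raw counts, so an area that simply
    # receives more policing doesn't automatically look more criminal.
    return rec["arrests"] / rec["patrol_hours"]

for rec in records:
    group = "treatment group" if rec["treated"] else "baseline"
    print(rec["area"], group, "adjusted rate:", round(adjusted_rate(rec), 2))

# Area A's raw count (120 vs 30) looks far worse, but per patrol hour the two
# areas are identical (0.3). Keeping the treated area flagged as its own
# subset also lets you track whether it improves over time instead of feeding
# the extra arrests straight back into the targeting model.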
My guess is that the computer created generalizations that simplified the process of identifying animals in the quickest way possible. If I am correct, such a thing reminds me of the discussion of how AIs talking to one another "created" a new language by simplifying the syntax of English.
Despite this, though, I doubt that at any point in the future we will rely solely on AIs.
Can I get a tldr?
Quote from: Dietrich Six on January 14, 2018, 11:43:36 PM
Can I get a tldr?

Advanced analytical systems and AI are increasingly being used to support policy being drafted and decisions being made about people, including in the area of law enforcement and criminal justice. While very beneficial in several ways, these new technologies also come with risks. The design of the system itself can be flawed, but equally realistic is that "bias" from big data will find its way into the AI that is supposed to learn from it.

AIs are created to detect patterns and apply them back into practice. This not only risks decisions about an individual person being made largely on the basis of a profile based on how people like him are expected to act, but also risks perpetuating current inequalities and problems. If prejudice or other societal factors lead to cops disproportionately targeting black people in "random" vehicle stops and patdowns, an AI learning from police records and arrest data can easily pick up on the relation between race and police encounters. From this data, it can draw the conclusion that black people are more likely to be criminals than whites, and that blacks should therefore be considered more likely suspects for unsolved crimes or be subject to even more scrutiny. When presented with two identical people of the exact same background and profile (with the only difference being that one is white and the other is black), the police AI will then pick out the black guy as the likely offender, because that's what it learned from (potentially biased and flawed) arrest data in the past.

This is a major issue, as it can decrease social mobility, exacerbate inequality and result in blatantly unfair treatment of people. It's made worse because it's done by a super intelligent computer that people are unlikely to doubt (as they believe it's hard maths and completely objective) and that's very difficult to hold accountable or assess for errors and bias (due to how complex, inaccessible and secretive these systems are).
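A toy example might make that last point clearer. This is only a sketch (Python, entirely synthetic data), but it shows how a model that just learns arrest rates from biased records ends up scoring two otherwise identical people differently:

Code:
from collections import defaultdict

# Synthetic "police records": (race, arrested). Underlying offending is
# identical across groups; only the stop rate differed in the past.
history = (
    [("black", 1)] * 30 + [("black", 0)] * 70 +
    [("white", 1)] * 10 + [("white", 0)] * 90
)

totals = defaultdict(lambda: [0, 0])  # race -> [arrests, records]
for race, arrested in history:
    totals[race][0] += arrested
    totals[race][1] += 1

def risk_score(person):
    # Learned "risk" is just the historical arrest rate for that group.
    arrests, records = totals[person["race"]]
    return arrests / records

suspect_a = {"race": "white"}
suspect_b = {"race": "black"}
print(risk_score(suspect_a), risk_score(suspect_b))  # 0.1 vs 0.3

# Two otherwise identical profiles, different scores, purely an artifact of
# who was stopped and recorded more often in the training data.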
Why are we listening to computers?
Quote from: Dietrich Six on January 15, 2018, 06:18:21 PM
Why are we listening to computers?

We already are. Every time you get into a car, you trust computers to tell you how fast you're going and whether it's safe to cross the street when the light's green. Every time you sign into your PC or console and log in to a secure service, you trust that the computer isn't sending your payments to a scammer and your personal information to a hacker. It's just becoming more pervasive.

There are a lot of reasons why this is taking off the way it is. Big data analytics and predictive computing can be used very effectively for a lot of good things. It can detect and predict the spread of infectious diseases before any human could. It can pick up on possible terrorist attacks before they happen. It can pick up on patterns investigators might miss to solve cases and fight crime. It can help businesses and government allocate their resources more effectively and free up precious time and commodities to spend elsewhere. It can automate tasks and improve the economy. It can assist researchers everywhere in mapping and addressing the consequences of global warming, pollution and international conflict. It can help delivery companies route their trucks better, medical businesses cure diseases faster and cities cut down on littering and traffic accidents more efficiently. It creates fun new technologies like automatic drones, self-driving cars and image recognition that lets computers identify what is in a picture and improve search engines. There are untold reasons why computers can help us make decisions. The problem is that, as with many new things, it's not all safe.
Regarding the first section, I feel pretty 50/50 about the implications here. While we can both agree that this isn't fair, the position of a college doing this is understandable. When it comes to compiling data, outliers shouldn't be taken as the norm of a distribution. If a college found two people of equal qualifications, but one came from a background with a family of drug abusers, I wouldn't chastise the college for choosing the safer of the two bets. However, like you said, this does create the problem of making social mobility harder for people. Generally, this is why safeguards such as affirmative action have been so commonplace.
With the latter quote, I do feel that much of what we are finding shows how much AI software is still in its infancy relative to its future potential; a lot of what we will see right now are the hiccups that come with refining these systems. Especially right now, given that AIs can only work through a series of yes-and-no answers.
All of these things use data that has been collected from humans and is therefore imperfect. The real problem is that machines can't feel like humans can and will likely go to extremes that humans recognize as unsafe or irresponsible.

Artificial intelligence will likely be the downfall of mankind, and I for one do not welcome our circuited overlords.

Will we never learn, Flee?
It seems there is no backup plan if the EU turns into a Fourth Reich. In the US, people have guns if the government starts oppressing people.
Quote from: Genghis Khan on January 16, 2018, 03:05:57 PM
It seems there is no backup plan if the EU turns into a Fourth Reich. In the US, people have guns if the government starts oppressing people.

But how can the EU turn tyrannical when Muslim immigrants are going to tear down the government and turn the entire continent into a barren wasteland controlled by Sharia law in the first place?
Quote from: Dietrich Six on January 15, 2018, 06:56:22 PM
All of these things use data that has been collected from humans and is therefore imperfect. The real problem is that machines can't feel like humans can and will likely go to extremes that humans recognize as unsafe or irresponsible.

Artificial intelligence will likely be the downfall of mankind, and I for one do not welcome our circuited overlords.

Will we never learn, Flee?

I agree with the first part but not so much the second. I think AI can be a huge force for good. We just need to be very careful and mindful from this point on. Advanced analytics need to be accountable, transparent and auditable. They need to be able to justify why they arrived at certain outcomes and how they analyzed data. Safeguards, alert mechanisms, and supervised and fair learning need to be standard and mandated by law. Independent and technically capable oversight bodies need to have access and sufficient power to scrutinize commercial and governmental dealings. The EU is taking steps towards this with its Resolutions on Big Data and Robotics as well as its new General Data Protection Regulation, but this also needs to catch on in the US (as it has in NYC, where the first algorithmic transparency bill was recently adopted). We can't shut down these technologies, and it's probably not in our best interests to do so either. We shouldn't be overly paranoid and shun them because of unlikely doomsday scenarios, but we should also show some serious restraint and take the proper steps to think this through and mitigate or avoid potentially negative consequences.
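As a rough illustration of what "auditable" could look like in practice, here is a minimal sketch (Python; the data is illustrative and the 0.8 cutoff just mirrors the common four-fifths rule) of a disparate impact check on a model's decisions:

Code:
def disparate_impact(decisions, groups, protected, reference):
    # Ratio of favorable-outcome rates: protected group vs reference group.
    def rate(group):
        outcomes = [d for d, g in zip(decisions, groups) if g == group]
        return sum(outcomes) / len(outcomes)
    return rate(protected) / rate(reference)

# 1 = favorable outcome for that individual (e.g. flagged as low risk).
decisions = [1, 0, 0, 1, 1, 1, 0, 1, 1, 1]
groups    = ["b", "b", "b", "b", "w", "w", "w", "w", "w", "w"]

ratio = disparate_impact(decisions, groups, protected="b", reference="w")
print("disparate impact ratio:", round(ratio, 2))
if ratio < 0.8:
    print("flag for review: the model may be treating the groups unequally")

Something this simple obviously isn't sufficient on its own, but routine checks like it, run by an independent body with real access to the system, are the sort of oversight I have in mind.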