Official Wikipedia articles thread

 
Naru
| The Tide Caller
 
XBL: Naru No Baka
PSN:
Steam: The Tide Caller
ID: GasaiYuno
IP: Logged

18,501 posts
The Rage....
Because Slash locked the other one like a furfag.

GO GO GO


R o c k e t | Mythic Smash Master
 
XBL: Rocketman287
PSN:
Steam: Rocketman287
ID: Rocketman287
IP: Logged

22,974 posts
I neither fear, nor despise.
Kyle Katarn is a fictional character in the Star Wars Expanded Universe, who appears in the five video games of the Jedi Knight series, the video game Star Wars: Lethal Alliance, and in several books and other material. In the Jedi Knight series, Katarn is the protagonist of Star Wars: Dark Forces and Star Wars Jedi Knight: Dark Forces II, one of two playable characters in Star Wars Jedi Knight: Mysteries of the Sith, the protagonist of Star Wars Jedi Knight II: Jedi Outcast and a major NPC in Star Wars Jedi Knight: Jedi Academy.
Katarn was originally a member of the Galactic Empire, before becoming a mercenary for hire. He regularly worked for the Rebel Alliance and later became a member of the New Republic as well as a skilled Jedi and an instructor at the Jedi Academy, second only to Luke Skywalker.
Katarn has been well received by most critics, with GameSpot including him in a vote for the greatest video game character of all time, where he was eliminated in round two when he faced Lara Croft.[1]
Contents
1 Appearances
1.1 Jedi Knight series
1.2 Star Wars literature
1.3 Other appearances
2 Development and depiction
3 Reception
4 References
5 External links
Appearances
Jedi Knight series
Katarn first appeared in Star Wars: Dark Forces, where he was introduced as a former Imperial officer who became a mercenary-for-hire after learning the Empire was responsible for the death of his parents.[2] As a mercenary, he regularly worked for the Rebel Alliance, where he was secretly dispatched by Mon Mothma on missions deemed too dangerous or sensitive for actual Rebel operatives. The game begins shortly before the events of the film A New Hope, with Katarn single-handedly infiltrating an Imperial facility on the planet Danuta to retrieve the plans for the first Death Star. The plans would eventually be forwarded to Princess Leia, leading to the destruction of the Death Star.[3] One year later, Katarn is employed to investigate the "Dark Trooper" project, a secret Imperial research initiative manufacturing powerful robotic stormtroopers to attack Alliance strongholds. After several adventures (including encounters with Jabba the Hutt and Boba Fett), Katarn terminates the Dark Trooper Project and kills its creator, General Rom Mohc, aboard his flagship, the Arc Hammer.[4]
Star Wars Jedi Knight: Dark Forces II takes place one year after the events of the film Return of the Jedi.[5] It begins with 8t88, an information droid, telling Katarn about the Dark Jedi Jerec, who killed Katarn's father, Morgan, in his efforts to find the Valley of the Jedi, a focal point for Jedi power and a Jedi burial ground. 8t88 also tells Katarn of a data disk recovered from Morgan after his death which can only be translated by a droid in Morgan's home. After 8t88 leaves Katarn to be killed, Katarn escapes, tracks down 8t88 and recovers the disk. He then heads to his home planet of Sulon and has the disk translated. The disk contains a message from Morgan, telling Katarn he must pursue the ways of the Jedi, and giving him a lightsaber. Katarn also learns that seven Dark Jedi are attempting to use the power found in the Valley to rebuild the Empire. Kyle eventually kills all seven Dark Jedi and saves the Valley.[6]
Star Wars Jedi Knight: Mysteries of the Sith, an expansion pack for Dark Forces II, takes place approximately five years later.[7] The game focuses on former Imperial assassin Mara Jade, who has come under Kyle's tutelage as she trains to be a Jedi. During this period, while investigating Sith ruins on Dromund Kaas, Kyle comes under the influence of the Dark Side of the Force, but Jade is able to turn him back to the Light.[8]
Star Wars Jedi Knight II: Jedi Outcast is set three years after Mysteries of the Sith.[9] Feeling vulnerable to another fall to the Dark Side, Kyle has chosen to forsake the Force and has returned to his former mercenary ways.[10] Whilst on a mission for Mon Mothma, Kyle's partner, Jan Ors, is apparently murdered at the hands of two Dark Jedi, Desann and Tavion. Determined to avenge her death, Katarn returns to the Valley of the Jedi to regain his connection to the Force. Taking back his lightsaber from Luke Skywalker, he sets out to track down Desann. After escaping from a trap with Lando Calrissian's help, Katarn heads to Cloud City and interrogates Tavion, who tells him that Jan is not dead at all. Desann simply pretended to kill her, knowing Katarn would return to the Valley, at which point Desann followed him so as to infuse his soldiers with the Force and reinstall the Imperial Remnant as rulers of the galaxy. Katarn spares Tavion's life and stows away on Desann's ship, the Doomgiver. After rescuing Jan, Katarn defeats the military scientist, Galak Fyyar, who tells him that Desann plans to use his Force-infused soldiers to attack the Jedi Academy on Yavin IV. Katarn enters the Academy and defeats Desann. After the battle, he tells Luke Skywalker that he is going to stay a Jedi, confident of his strength and dedication to the Light Side.[11]
Star Wars Jedi Knight: Jedi Academy takes place a year after Jedi Outcast,[12] and is the first game in the series in which Katarn is not a playable character. The game begins as he is appointed master of two new students, Jaden Korr and Rosh Penin. Rosh soon begins to feel held back and comes to resent Katarn. It is soon discovered that a Sith cult named the Disciples of Ragnos are stealing Force energy from various locations across the galaxy via a scepter. Along with others, Katarn and his students embark on a number of missions in an effort to discover what the cult are hoping to do with the powers they steal. During one such mission, while investigating the ruins of the planet formerly known as Byss, Rosh is captured and converted to the Dark Side by the cult's leader, Tavion (Desann's former apprentice). Jaden and Katarn escape and conclude that Tavion is storing the stolen Dark Force energy in the scepter in order to use it to resurrect an ancient Sith master, Marka Ragnos.[13] After receiving a distress message from Rosh, who has returned to the Light Side and is now a prisoner, Katarn and Jaden go to rescue him, only to discover that the distress signal was a scheme to lure the two in. After defeating Rosh, Jaden is confronted with a choice: kill him and turn to the Dark Side, or spare him and remain on the Light Side. If the player kills Rosh, the game ends with Jaden killing Tavion, taking the scepter and fleeing, with Katarn heading out in pursuit. If the player chooses not to kill Rosh, the game ends with Jaden killing Tavion and defeating the spirit of Ragnos.[14]
Star Wars literature
In The New Jedi Order series of novels, Katarn becomes the Jedi Academy's foremost battlemaster, a close friend of Luke Skywalker, and a respected Jedi Master. During the Yuuzhan Vong invasion, Katarn helps develop strategies to use against the invaders, and participates in the rescue of human captives from the Imperial Remnant world Ord Sedra. Near the end of the war, the living planet Zonama Sekot agrees to help the Republic; Katarn is one of several Jedi Knights who bonds to seed-partners and is provided with Sekotan starships to use in Sekot's defence.[15]
During Troy Denning's Dark Nest trilogy (The Joiner King, The Unseen Queen and The Swarm War), Katarn is one of four Jedi Masters who attempt to destroy the Dark Nest. Katarn also speaks his mind during a Master's Council session, where he stands up to Chief of State Cal Omas. He, along with Corran Horn and other Masters, believes that Jaina Solo and Zekk could be the next leaders of the Dark Nest. In The Swarm War (the final part of the trilogy), Katarn leads a squadron of Jedi Stealth X's against the Killiks.[16]
Katarn also appears in Karen Traviss' Legacy of the Force novels Bloodlines, Sacrifice, Exile and Fury, as a Jedi Master participating in Council meetings. In Bloodlines, he helps to point out the "embarrassment" to the Jedi Order of Jacen Solo's actions in apprehending Corellians on Coruscant.[17] In Exile, he plays devil's advocate regarding Leia Organa's supposed betrayal of the Galactic Alliance, although he reasserts his loyalty to Leia by being the first to formally declare his faith in her at the meeting's conclusion.[18] Katarn plays a much larger role in Fury, leading a team of Jedi against Jacen Solo in a capture-or-kill mission. After a fierce four-way lightsaber duel, Katarn is severely wounded and the mission ends in failure.[19]
Other appearances
Katarn's adventures are also told in three hardcover graphic story albums written by William C. Dietz, which were adapted into audio dramatizations: Soldier for the Empire, Rebel Agent and Jedi Knight.[3][15]
Katarn also appears in the Star Wars Roleplaying Game and is a premiere figure of "The New Jedi Order" faction in the Wizards of the Coast Star Wars Miniatures. The Wizards of the Coast web series, The Dark Forces Saga, highlights his background, as well as those of most of the other heroes and villains found in the games.
He also appears in the video game Star Wars: Empire at War, where he can be used in the 'Skirmish' battle mode as a special 'hero' unit. The game is set between Episode III and Episode IV, and, as such, Katarn cannot use Force powers.[20]
The popularity of characters from Dark Forces resulted in LucasArts licensing toys based on the game. Hasbro produced Kyle Katarn and Dark Trooper toys, which are among the few Expanded Universe items to be turned into action figures.[21]
Development and depiction
Originally, the protagonist of Dark Forces was to be Luke Skywalker. However, the developers of the game realized that this would add constraints to gameplay and storyline, and instead a new character, Kyle Katarn, was created.[3] For Jedi Academy, an early decision made during development was whether or not to have Kyle Katarn as the playable character. This was because the character was already a powerful Jedi Knight, and starting the game with his Force skills would affect the gameplay.[22] To resolve this issue, the developers chose to make the playable character a student in the Jedi Academy. Katarn was then made an instructor in the academy and integral to the plot to ensure that Jedi Academy built upon the existing Jedi Knight series storyline.[22]
Katarn was voiced by Nick Jameson in Star Wars: Dark Forces. He was portrayed by Jason Court in the full motion video sequences of Dark Forces II. The in-game model was modeled after Court to maintain consistency. In Mysteries of the Sith, Jedi Outcast and Jedi Academy, Katarn's appearance is exclusively a polygonal model, without any FMV scenes, in which he is designed to look like a slightly older Court. In Mysteries of the Sith, he is voiced by Rino Romano, and in the two subsequent games by Jeff Bennett. For the audio dramatizations, he is portrayed by Randal Berger.[23] In Pendant Productions' Blue Harvest, Katarn is voiced by Scott Barry.[24]
Reception
GameDaily's Robert Workman listed Katarn as one of his favourite Star Wars video game characters.[25] IGN placed him as their 22nd top Star Wars character, praising him as "a gamer's reliable blank slate," a feature which they felt made him one of the most "human" Star Wars characters. They also stated that Katarn endeared himself to fans because of his "mishmash of quirks and dispositions."[26] In 2009, IGN's Jesse Schedeen argued that the character should not appear in the then-upcoming Star Wars live-action TV series, feeling that "Katarn isn't very interesting without his Jedi abilities," and that deeply exploring his past was not really warranted.[27] Schedeen also included Katarn as one of his favourite Star Wars heroes and video game sword masters.[28][29] In GameSpot's vote for the all-time greatest video game hero, Katarn was eliminated in round two when he faced Lara Croft, garnering 27.5% of the votes.[1] In round one he defeated Dig Dug, with 67.6% of the votes.[30]
On the other hand, GamesRadar was critical of Katarn, calling him the third worst character in video gaming, saying "he's bearded, he's boring, he's bland and his name is Kyle Katarn," and comparing his outfit to that of a "beige-obsessed disco cowboy." They also commented that while "originally a genuinely interesting character in the Han Solo mold," the character had become "emotionless" after he gained Force powers.[31]


 
SecondClass
| Carmen
 
XBL:
PSN: ModernLocust
Steam:
ID: SecondClass
IP: Logged

30,001 posts
"With the first link, the chain is forged. The first speech censured, the first thought forbidden, the first freedom denied, chains us all irrevocably."
—Judge Aaron Satie
——Carmen
FBI Special Agent Dale Bartholomew Cooper, portrayed by Kyle MacLachlan, is a fictional character and the protagonist of the ABC television series Twin Peaks. He briefly appears in the prequel film Twin Peaks: Fire Walk with Me.
Cooper is an eccentric FBI agent who arrives in Twin Peaks in 1989 to investigate the brutal murder of the popular high school student Laura Palmer, falling in love with the town and gaining a great deal of acceptance within the tightly knit community. He displays an array of quirky mannerisms, such as giving a 'thumbs up' when satisfied, sage-like sayings, and a distinctive sense of humor, along with his love for a good cherry pie and a "damn fine cup of coffee" (which he takes black). One of his best-known habits is recording spoken-word tapes to a mysterious woman called 'Diane' on the microcassette recorder he always carries with him; the tapes often contain everyday observations and thoughts on his current case.
Contents
1 Concept and creation
2 Character arc
2.1 Relationships
3 In other media
3.1 The Autobiography of F.B.I. Special Agent Dale Cooper: My Life, My Tapes
3.2 "Diane..." - The Twin Peaks Tapes of Agent Cooper (audio book)
3.3 "Saturday Night Live Sketch"
4 References
5 External links
Concept and creation
Lynch named Cooper in reference to D. B. Cooper, an unidentified man who hijacked a Boeing 727 aircraft on November 24, 1971.[1]
MacLachlan has stated that he views Cooper as an older version of his character in Blue Velvet, a previous David Lynch collaboration. "I see my character as Jeffrey Beaumont grown up. Instead of being acted upon, he has command on the world."[2]
Character arc
Born on April 19, 1954, Cooper is a graduate of Haverford College. He is also revealed to be something of an introspective personality, due to his profound interest in the mystical, particularly in Tibet and Native American mythology. Much of his work is based on intuition and even dreams; this is in contrast to other fictional detectives who use logic to solve their cases. On joining the Federal Bureau of Investigation, Dale Cooper was based at the Bureau offices in Philadelphia. It was here that Cooper was partnered with the older Windom Earle. At some point, Cooper was placed under the authority of FBI Chief Gordon Cole, which sometimes meant being handed the mysterious 'Blue Rose' cases. Some time after Cooper joined the Bureau, Earle's wife, Caroline, was a witness to a federal crime. Earle and Cooper were assigned to protect her, and it was around this time that Cooper began an affair with Caroline. However, one night, whilst in Pittsburgh, Cooper let his guard down, and Caroline was murdered by her husband. Cooper's former partner had "lost his mind", and was subsequently sent to a mental institution. Cooper was absolutely devastated by the loss of the woman he would later refer to as the love of his life, and swore never again to get involved with someone who was a part of a case to which he was assigned.
Three years before his arrival in Twin Peaks, Cooper has a dream involving the plight of the Tibetan people, which reveals to him the deductive technique of the Tibetan method. Deeply moved by what he sees in this dream, Cooper makes it the basis of his unconventional methods of investigation. He tells his boss, Cole, of the portents of a strange dream. In the meantime, Special Agent Chester Desmond disappears while investigating a bizarre murder case. Cooper picks up the case, but is unable to find any evidence which could lead to the discovery of what happened to Desmond or Theresa Banks, the murder victim. Roughly a year later, in 1989, Cooper tells Rosenfield in the Philadelphia offices how he senses Banks' killer will strike again soon, and that his victim will be a young woman who has blonde hair, is sexually active, is using drugs, and is crying out for help. Rosenfield is quick to dismiss Cooper's notion.
On February 24, 1989, Cooper comes to the town of Twin Peaks to investigate the murder of Laura Palmer. He eventually helps the Twin Peaks Sheriff's Department in investigating other cases as well. While in Twin Peaks, he learns of the mysterious places called the Black Lodge and the White Lodge and the spirits inhabiting them. In the final episode of Twin Peaks, Cooper enters the evil Black Lodge to rescue his love interest, Annie Blackburn. In the Black Lodge, he encounters his evil doppelganger, who eventually leaves the Black Lodge while Cooper remains there, his ultimate fate unknown.
The feature film Twin Peaks: Fire Walk with Me subtly expands on the events of Cooper's fate in the series finale, while at the same time functioning as a prequel that details the last week of Laura Palmer's life. At one point, while experiencing a strange dream involving the Black Lodge and its residents, Laura encounters Cooper in the non-linear realm at a point after he has become trapped there. Cooper implores her not to take "the ring", a mysterious object that gives its wearer a sort of connection to the Black Lodge. Shortly thereafter, Laura also has a vision of a bloody Annie Blackburn beside her in her bed, who tells her: "My name is Annie. I've been with Dale and Laura. The good Dale is in the Lodge, and he can't leave. Write it in your diary." (While it is unknown whether Laura did in fact transcribe this to the diary in her possession at the time, one of Twin Peaks' head writers, Harley Peyton, suggested in a later interview that she did. Interviews with those intimately involved in the TV series seem to confirm that a Season Three story arc would have included the finding of Laura's diary entry and a rescue mission, headed by Major Briggs, to retrieve Cooper from the Black Lodge.)
At the film's conclusion, Laura's spirit sits in the Red Room, looking up at Cooper, whose hand rests on her shoulder as he smiles at her. Shortly thereafter, Laura's angel appears and the film ends. Although the film's final image of Laura cast in white indicates that she has ascended to the White Lodge, the meaning behind Cooper's presence alongside her, and indeed his ultimate fate (whether he ever escaped the Black Lodge), is unknown.
Relationships
Much like how he relates to the town itself, Cooper gains an instant rapport with many of the townspeople on arrival in Twin Peaks, most particularly Sheriff Harry S. Truman and his deputies, Deputy Tommy "Hawk" Hill and Deputy Andy Brennan. While Truman is initially skeptical of Cooper's unconventional investigation methods and other-worldly ideas, he is most often willing to accept Cooper's judgment, even referring to Cooper as "the finest lawman I have ever known" to agents investigating Cooper's alleged drug-running to Canada. Over time, a deep bond emerges between the two, as displayed in various scenes: Truman assisting Cooper in rescuing Audrey Horne from One-Eyed Jack's, Truman deputizing Cooper following Cooper's suspension from the Bureau, and Truman waiting patiently for two days at Glastonbury Grove for Cooper to emerge from the Black Lodge in the series finale.
Cooper's strongest relationship outside of the townspeople is his friendship with his colleague, Agent Albert Rosenfield. Though he has strong respect and admiration for Rosenfield's medical skills, and is seemingly not intimidated by Rosenfield's sarcastic manner, he has little tolerance or patience for Rosenfield's treatment of the town's citizens, most particularly his animosity towards Sheriff Truman (which notably thaws over time).
Prior to Twin Peaks, Cooper's strongest romantic relationship was his affair with Caroline Earle, the wife of his former partner, Windom Earle. Caroline had been under Cooper and Earle's protection after witnessing a federal crime Earle committed when he lost his mind, but on one night when Cooper's guard was down, Caroline was murdered by Windom. Caroline's death and his failure to protect her continue to haunt Cooper on his arrival in Twin Peaks; he refers to a "broken heart" when discussing women with Truman and his deputies. He also relates a version of the story of Caroline to the teenage Audrey Horne.
On arrival in Twin Peaks, Cooper quickly becomes aware that 18-year-old Audrey Horne, the daughter of local businessman Benjamin Horne, harbors a crush on him. The attraction appears mutual, as Cooper is clearly drawn to Audrey, but he is quick to rebuff her advances when Audrey turns up in his hotel bed. Cooper explains she is too young, but he does genuinely want to be her friend. However, following her disappearance orchestrated by Jean Renault, Cooper privately confesses to Diane that in Audrey's absence all he can think of is her smile. Following her rescue, there remains a close and affectionate friendship between the two, most notably when Audrey arrives at his hotel room for comfort following her father's arrest, and in her sad farewell when she believes Cooper is leaving Twin Peaks for good. Audrey later gives Cooper a surprising kiss when she discovers evidence that clears him of drug charges, and they later dance at the Milford wedding.
However, during the production of the series' second season, Kyle MacLachlan (as he notes during an interview on the 2007 Gold Edition Twin Peaks DVD set) vetoed the possibility of a romantic relationship, as he felt his character should not sleep with a high school girl.[citation needed] Since the series' cancellation, the writers have often said that the Cooper-Audrey relationship was to have been the main plot following the resolution of the Laura Palmer murder mystery, and that the veto forced them to focus more on the supporting characters.[citation needed]
Following his reinstatement to the FBI, Cooper meets Annie Blackburn, the sister of Norma Jennings, with whom he instantly falls in love. Annie is established as a kindred spirit, experiencing the world with curiosity and wonder. Much like Cooper's pain over Caroline Earle, Annie also nurses a broken heart from someone in her past; it is implied that this may have resulted in suicide attempts and affected her decision to later enter a convent. Cooper helps her to prepare for participation in the Miss Twin Peaks contest. However, during the contest she is kidnapped by Windom Earle and taken to the Black Lodge to use her 'fear' to open the gateway.
In other media
During the second season of Twin Peaks, Simon & Schuster's Pocket Books division released several official tie-in publications, each written by the series' creators or members of their families, which offer a wealth of character back-stories; Cooper's, covered in two such publications, is one of the best-developed of these back-stories.
The Autobiography of F.B.I. Special Agent Dale Cooper: My Life, My Tapes
Many of the details of Cooper's history as previously cited are drawn from a book that producer Mark Frost's brother Scott Frost wrote as a companion to the series, titled The Autobiography of F.B.I. Special Agent Dale Cooper: My Life, My Tapes. The book is catalogued as ISBN 978-0-330-27280-3.
"Diane..." - The Twin Peaks Tapes of Agent Cooper (audio book)[edit]
Early in the second season of Twin Peaks, Simon & Schuster Audio released Diane ... The Twin Peaks Tapes of Agent Cooper, a cassette-only release also performed by Kyle MacLachlan. The tape consists of newly recorded messages from Cooper to his never-seen assistant, Diane, mixed in with monologues from the original broadcasts. It begins with a prologue monologue in which Cooper discusses his pending trip to Twin Peaks, continues with the initial monologue heard in the pilot, and runs to a point after his recovery from being shot. For his work on this release, MacLachlan was nominated for a Grammy Award for best spoken-word performance.
"Saturday Night Live Sketch"[edit]
When Kyle MacLachlan guest hosted Saturday Night Live in 1990, at the height of Twin Peaks' popularity, the episode contained many references to the series throughout. Also featured was a sketch parodying the show and in particular Dale Cooper. Cooper is portrayed in the sketch as being extremely attentive to detail in his messages to Diane, including informing her of how many hairs he found in his shower the night before. Sheriff Harry S. Truman (Kevin Nealon) then visits Cooper, telling him that Leo Johnson (Chris Farley) has confessed to the murder of Laura Palmer and that he can go home. Cooper raises concerns that the investigation may not be over because he had a dream the previous night in which "A hairless mouse with a pitchfork sang a song about caves." He discards Leo's confession in spite of the overwhelming evidence. He is then visited by several Twin Peaks residents all played by SNL cast members: Audrey Horne, played by Victoria Jackson, who gives Cooper a going away gift and ties the ribbon with her tongue; Leland Palmer, played by Phil Hartman, who requests that Cooper dance with him; Nadine Hurley (Jan Hooks), who wants Cooper to take her silent drape runners to the patent office; The Log Lady, also played by Hooks, following Truman's observation that there were only two female SNL cast members; and finally Leo in custody of Deputy Andy Brennan (Conan O'Brien). Cooper protests that the case can't be over so soon and insists in vain that he and Truman perform several pointless tasks in order to aid him in the already solved investigation, including going to a graveyard at midnight disguised as altar boys. As everyone begins to leave, Cooper declares that they can't leave because they still don't know who shot him at the end of Season One. Leo, however, confesses to shooting Cooper, adding that Cooper himself saw him do it. Cooper reluctantly goes to bed as The Man from Another Place (Mike Myers) begins to dance at the foot of his bed.
References
1. Davis, Jeff; Al Eufrasio; Mark Moran (2008). Weird Washington. Sterling Publishing Company, Inc. p. 65. ISBN 978-1-4027-4545-4. OCLC 179788749.
2. Woodward, Richard B. (January 14, 1990). "A DARK LENS ON AMERICA". The New York Times. davidlynch.de. Retrieved October 29, 2012.
External links
Agent Cooper Twin Peaks card
Twin Peaks Saturday Night Live Sketch


Magos Domina | Heroic Invincible!
 
XBL:
PSN:
Steam:
ID: Kiyohime
IP: Logged

6,711 posts
01001001 01101101 00100000 01100111 01101111 01101001 01101110 01100111 00100000 01110100 01101111 00100000 01110100 01101000 01110010 01101111 01110111 00100000 01100001 00100000 01110011 01110000 01101001 01100100 01100101 01110010 00100000 01100001 01110100 00100000 01111001 01101111 01110101
This article is about the colour. For other uses, see Blue (disambiguation).
Blue
Spectral coordinates
Wavelength: 450–495 nm
Frequency: ~670–610 THz
Colour coordinates
Hex triplet: #0000FF
sRGB (r, g, b): (0, 0, 255)
HSV (h, s, v): (240°, 100%, 100%)
Source: HTML/CSS[1]
(sRGB values normalized to the 0–255 byte range)
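The infobox values above are mutually consistent: the hex triplet #0000FF decodes to the sRGB byte triple (0, 0, 255), which corresponds to an HSV hue of 240°, and the 450–495 nm wavelength band maps to roughly 670–610 THz via f = c / λ. A minimal sketch of those conversions in Python (standard-library colorsys only; the helper names are mine, purely illustrative):

```python
import colorsys

def hex_to_rgb(hex_triplet):
    """Decode a hex triplet such as '#0000FF' into an (r, g, b) byte tuple."""
    h = hex_triplet.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def rgb_to_hsv_degrees(r, g, b):
    """Convert byte RGB to (hue in degrees, saturation in %, value in %)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return h * 360, s * 100, v * 100

def wavelength_nm_to_thz(wavelength_nm):
    """Frequency f = c / wavelength, reported in terahertz."""
    c = 299_792_458  # speed of light, m/s
    return c / (wavelength_nm * 1e-9) / 1e12

print(hex_to_rgb("#0000FF"))           # (0, 0, 255)
print(rgb_to_hsv_degrees(0, 0, 255))   # (240.0, 100.0, 100.0)
print(wavelength_nm_to_thz(450))       # ~666 THz (infobox: ~670)
print(wavelength_nm_to_thz(495))       # ~606 THz (infobox: ~610)
```

Run as written, the printed values reproduce the infobox figures to within rounding.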

Blue is the colour of the clear sky and the deep sea.[2][3] It is located between violet and green on the optical spectrum.[4]

Surveys in the U.S. and Europe show that blue is the colour most commonly associated with harmony, faithfulness, confidence, distance, infinity, the imagination, cold, and sometimes with sadness.[5] In U.S. and European public opinion polls it is overwhelmingly the most popular colour, chosen by almost half of both men and women as their favourite colour.[5]

Contents

    1 Shades and variations
    2 Etymology and linguistic differences
    3 History
        3.1 In the ancient world
        3.2 In the Byzantine Empire and the Islamic World
        3.3 During the Middle Ages
        3.4 In the European Renaissance
        3.5 Blue and white porcelain
        3.6 The war of the blues – indigo versus woad
        3.7 The blue uniform
        3.8 The search for the perfect blue
        3.9 The Impressionist painters
        3.10 The blue suit
        3.11 In the 20th and 21st century
    4 In science and industry
        4.1 Pigments and dyes
        4.2 Optics
        4.3 Scientific natural standards
        4.4 Why the sky and sea appear blue
        4.5 Atmospheric perspective
        4.6 Blue eyes
        4.7 Lasers
    5 In nature
        5.1 Animals
    6 In world culture
        6.1 As a national and international colour
        6.2 Politics
        6.3 Religion
        6.4 Gender
        6.5 Music
        6.6 Associations and sayings
    7 Sports
        7.1 The blues of antiquity
        7.2 Association football
        7.3 North American sporting leagues
    8 See also
    9 References
        9.1 Notes and citations
        9.2 Bibliography
    10 External links

Shades and variations
Main article: Shades of blue
Blue is between violet and green in the spectrum of visible light

Blue is the colour of light between violet and green on the visible spectrum. Hues of blue include indigo and ultramarine, closer to violet; pure blue, without any mixture of other colours; cyan, which is midway on the spectrum between blue and green; and the other blue-greens turquoise, teal, and aquamarine.

Blues also vary in shade or tint; darker shades of blue contain black or grey, while lighter tints contain white. Darker shades of blue include ultramarine, cobalt blue, navy blue, and Prussian blue, while lighter tints include sky blue, azure, and Egyptian blue. (For a more complete list, see the List of colours.)
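To make that shade/tint distinction concrete: a shade moves each channel of the base colour toward black, while a tint moves it toward white. A minimal illustrative sketch (the mix helper and the 50% ratio are my own choices, not from the article):

```python
def mix(rgb, other, t):
    """Move each channel of rgb toward the matching channel of other by fraction t."""
    return tuple(round(a + (b - a) * t) for a, b in zip(rgb, other))

BLUE = (0, 0, 255)

shade = mix(BLUE, (0, 0, 0), 0.5)        # toward black -> (0, 0, 128), a darker shade
tint = mix(BLUE, (255, 255, 255), 0.5)   # toward white -> (128, 128, 255), a lighter tint
print(shade, tint)
```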

Blue pigments were originally made from minerals such as lapis lazuli, cobalt and azurite, and blue dyes were made from plants; usually woad in Europe, and Indigofera tinctoria, or True indigo, in Asia and Africa. Today most blue pigments and dyes are made by a chemical process.

    Earth is sometimes called the blue planet. A photomontage of the Earth seen from space (NASA image).

    Blue is the colour of the deep sea and the clear sky. The harbour of Toulon, France, on the Mediterranean Sea.

    Pure blue, also known as high blue, is not mixed with any other colours.

    Navy blue, also known as low blue, is the darkest shade of pure blue.

    Sky blue or pale azure, mid-way on the RGB colour wheel between blue and cyan.

    Extract of natural Indigo, the most popular blue dye before the invention of synthetic dyes. It was the colour of the first blue jeans.

    A block of lapis lazuli, originally used to make ultramarine.

    Ultramarine, the most expensive blue during the Renaissance, is a slightly violet-blue.

    Cobalt has been used since 2000 BC to colour cobalt glass, Chinese porcelain, and the stained glass windows of medieval cathedrals.

    The synthetic pigment cobalt blue was invented in 1802, and was popular with Vincent van Gogh and other impressionist painters.

    Cyan is made by mixing equal amounts of blue and green light, or removing red from white light.

    The colour teal takes its name from the colour around the eyes of the common teal duck.

    Egyptian blue goblet from Mesopotamia, 1500–1300 BC. This was the first synthetic blue, first made in about 2500 BC.

    Prussian blue, invented in 1707, was the first modern synthetic blue.

    Cerulean blue pigment was invented in 1805 and first marketed in 1860. It was frequently used for painting skies.

Etymology and linguistic differences

The modern English word blue comes from Middle English bleu or blewe, from the Old French bleu, a word of Germanic origin, related to the Old High German word blao.[6] In heraldry, the word azure is used for blue.[7]

In Russian and some other languages, there is no single word for blue, but rather different words for light blue (голубой, goluboy) and dark blue (синий, siniy).

Several languages, including Vietnamese, Japanese, Thai, Korean, and Lakota Sioux, use the same word to describe blue and green. For example, in Vietnamese the colour of both tree leaves and the sky is xanh. In Japanese, the word for blue (青 ao) is often used for colours that English speakers would refer to as green, such as the colour of a traffic signal meaning "go". (For more on this subject, see Distinguishing blue from green in language.)
History
In the ancient world

Blue was a latecomer among colours used in art and decoration, as well as language and literature.[8] Reds, blacks, browns, and ochres are found in cave paintings from the Upper Paleolithic period, but not blue. Blue was also not used for dyeing fabric until long after red, ochre, pink and purple. This is probably due to the perennial difficulty of making good blue dyes and pigments.[9] The earliest known blue dyes were made from plants – woad in Europe, indigo in Asia and Africa, while blue pigments were made from minerals, usually either lapis lazuli or azurite.

Lapis lazuli, a semi-precious stone, has been mined in Afghanistan for more than three thousand years, and was exported to all parts of the ancient world.[10] In Iran and Mesopotamia, it was used to make jewellery and vessels. In Egypt, it was used for the eyebrows on the funeral mask of King Tutankhamun (1341–1323 BC).[11]

The cost of importing lapis lazuli by caravan across the desert from Afghanistan to Egypt was extremely high. Beginning in about 2500 BC, the ancient Egyptians began to produce their own blue pigment known as Egyptian blue, made by grinding silica, lime, copper and alkali, and heating it to 800 or 900 degrees C. This is considered the first synthetic pigment.[12] Egyptian blue was used to paint wood, papyrus and canvas, and was used to colour a glaze to make faience beads, inlays, and pots. It was particularly used in funeral statuary and figurines and in tomb paintings. Blue was considered a beneficial colour which would protect the dead against evil in the afterlife. Blue dye was also used to colour the cloth in which mummies were wrapped.[13]

In Egypt, blue was associated with the sky and with divinity. The Egyptian god Amun could make his skin blue so that he could fly, invisible, across the sky. Blue could also protect against evil; many people around the Mediterranean still wear a blue amulet, representing the eye of God, to protect them from misfortune.[14]

Blue glass was manufactured in Mesopotamia and Egypt as early as 2500 BC, using the same copper ingredients as Egyptian blue pigment. They also added cobalt, which produced a deeper blue, the same blue produced in the Middle Ages in the stained glass windows of the cathedrals of Saint-Denis and Chartres.[15]

The Ishtar Gate of ancient Babylon (604–562 BC) was decorated with deep blue glazed bricks used as a background for pictures of lions, dragons and aurochs.[16]

The ancient Greeks classified colours by whether they were light or dark, rather than by their hue. The Greek word for dark blue, kyaneos, could also mean dark green, violet, black or brown. The ancient Greek word for a light blue, glaukos, also could mean light green, grey, or yellow.[17]

The Greeks imported indigo dye from India, calling it indikon. They used Egyptian blue in the wall paintings of Knossos, in Crete (2100 BC). It was not one of the four primary colours for Greek painting described by Pliny the Elder (red, yellow, black and white), but nonetheless it was used as a background colour behind the friezes on Greek temples and to colour the beards of Greek statues.[18]

The Romans also imported indigo dye, but blue was the colour of working class clothing; the nobles and rich wore white, black, red or violet. Blue was considered the colour of mourning. It was also considered the colour of barbarians; Julius Caesar reported that the Celts and Germans dyed their faces blue to frighten their enemies, and tinted their hair blue when they grew old.[19]

Nonetheless, the Romans made extensive use of blue for decoration. According to Vitruvius, they made dark blue pigment from indigo, and imported Egyptian blue pigment. The walls of Roman villas in Pompeii had frescoes of brilliant blue skies, and blue pigments were found in the shops of colour merchants.[18] The Romans had many different words for varieties of blue, including caeruleus, caesius, glaucus, cyaneus, lividus, venetus, aerius, and ferreus, but two words, both of foreign origin, became the most enduring; blavus, from the Germanic word blau, which eventually became bleu or blue; and azureus, from the Arabic word lazaward, which became azure.[20]

    Lapis lazuli pendant from Mesopotamia (Circa 2900 BC).

    A lapis lazuli bowl from Iran (end of 3rd, beginning of 2nd millennium BC)

    A hippo decorated with aquatic plants, made of faience with a blue glaze, made to resemble lapis lazuli. (2033–1710 BC)

    Egyptian blue colour in a tomb painting (Around 1500 BC)

    Egyptian faience bowl (Between 1550 and 1450 BC)

    A decorated cobalt glass vessel from Ancient Egypt (1450–1350 BC)

    The blue eyebrows in the gold funeral mask of King Tutankhamun are made of lapis lazuli. Other blues in the mask are made of turquoise, glass and faience.

    Figure of a servant from the tomb of King Seth I (1244–1279 BC). The figure is made of faience with a blue glaze, designed to resemble turquoise.

    A lion against a blue background from the Ishtar Gate of ancient Babylon. (575 BC)

    A Roman wall painting of Venus and her son Eros, from Pompeii (about 30 BC)

    Mural in the bedroom of the villa of Fannius Synestor in Boscoreale, (50-40 BC) in the Metropolitan Museum.

    A painted pottery pot coloured with Han blue from the Han Dynasty in China (206 BC to 220 AD).

    A tomb painting from the eastern Han Dynasty (25–220 AD) in Henan Province, China.

In the Byzantine Empire and the Islamic World

Dark blue was widely used in the decoration of churches in the Byzantine Empire. In Byzantine art Christ and the Virgin Mary usually wore dark blue or purple. Blue was used as a background colour representing the sky in the magnificent mosaics which decorated Byzantine churches.[21]

In the Islamic world, blue was of secondary importance to green, believed to be the favourite colour of the Prophet Mohammed. At certain times in Moorish Spain and other parts of the Islamic world, blue was the colour worn by Christians and Jews, because only Muslims were allowed to wear white and green.[22] Dark blue and turquoise decorative tiles were widely used to decorate the facades and interiors of mosques and palaces from Spain to Central Asia. Lapis lazuli pigment was also used to create the rich blues in Persian miniatures.


    Blue Byzantine mosaic ceiling representing the night sky in the Mausoleum of Galla Placidia in Ravenna, Italy (5th century).

    Blue mosaic in the cloak of Christ in the Hagia Sophia church in Istanbul (13th century).

    Glazed stone-paste bowl from Persia (12th century).

    Decorated page of a Koran from Persia (1373 AD)

    Blue tile on the facade of the Friday Mosque in Herat, Afghanistan (15th century).

    Persian miniature from the 16th century.

    Decoration in the Murat III hall of the Topkapi Palace in Istanbul (16th century).

    Flower-pattern tile from Iznik, Turkey, from second half of 16th century.

    Gazelle against a blue sky in the Alhambra Palace, Spain (14th century)

During the Middle Ages

In the art and life of Europe during the early Middle Ages, blue played a minor role. The nobility wore red or purple, while only the poor wore blue clothing, coloured with poor-quality dyes made from the woad plant. Blue played no part in the rich costumes of the clergy or the architecture or decoration of churches. This changed dramatically between 1130 and 1140 in Paris, when the Abbe Suger rebuilt the Saint Denis Basilica. He installed stained glass windows coloured with cobalt, which, combined with the light from the red glass, filled the church with a bluish violet light. The church became the marvel of the Christian world, and the colour became known as the "bleu de Saint-Denis". In the years that followed even more elegant blue stained glass windows were installed in other churches, including at Chartres Cathedral and Sainte-Chapelle in Paris.[23]

Another important factor in the increased prestige of the colour blue in the 12th century was the veneration of the Virgin Mary, and a change in the colours used to depict her clothing. In earlier centuries her robes had usually been painted in sombre black, grey, violet, dark green or dark blue. In the 12th century they began to be painted a rich lighter blue, usually made with a new pigment imported from Asia; ultramarine. Blue became associated with holiness, humility and virtue.

Ultramarine was made from lapis lazuli, from the mines of Badakshan, in the mountains of Afghanistan, near the source of the Oxus River. The mines were visited by Marco Polo in about 1271; he reported, "here is found a high mountain from which they extract the finest and most beautiful of blues." Ground lapis was used in Byzantine manuscripts as early as the 6th century, but it was impure and varied greatly in colour. Ultramarine refined out the impurities through a long and difficult process, creating a rich and deep blue. It was called bleu outremer in French and blu oltremare in Italian, since it came from the other side of the sea. It cost far more than any other colour, and it became the luxury colour for the Kings and Princes of Europe.[24]

King Louis IX of France, better known as Saint Louis (1214–1270), became the first King of France to regularly dress in blue. This was copied by other nobles. Paintings of the mythical King Arthur began to show him dressed in blue. The coat of arms of the Kings of France became an azure or light blue shield, sprinkled with golden fleur-de-lis or lilies. Blue had come from obscurity to become the royal colour.[25]

Once blue became the colour of the King, it also became the colour of the wealthy and powerful in Europe. In the Middle Ages in France and to some extent in Italy, the dyeing of blue cloth was subject to license from the crown or state. In Italy, the dyeing of blue was assigned to a specific guild, the tintori di guado, and could not be done by anyone else without severe penalty. The wearing of blue implied some dignity and some wealth.[26]

Besides ultramarine, several other blues were widely used in the Middle Ages and later in the Renaissance. Azurite, a form of copper carbonate, was often used as a substitute for ultramarine. The Romans used it under the name lapis armenius, or Armenian stone. The British called it azure of Amayne, or German azure. The Germans themselves called it bergblau, or mountain blue. It was mined in France, Hungary, Spain and Germany, and it made a pale blue with a hint of green, which was ideal for painting skies. It was a favourite background colour of the German painter Albrecht Dürer.[27]

Another blue often used in the Middle Ages was called tournesol or folium. It was made from the plant Crozophora tinctoria, which grew in the south of France. It made a fine transparent blue valued in medieval manuscripts.[28]

Another common blue pigment was smalt, which was made by grinding blue cobalt glass into a fine powder. It made a deep violet blue similar to ultramarine, and was vivid in frescoes, but it lost some of its brilliance in oil paintings. It became especially popular in the 17th century, when ultramarine was difficult to obtain. It was employed at times by Titian, Tintoretto, Veronese, El Greco, Van Dyck, Rubens and Rembrandt.[29]

    Stained glass windows of the Basilica of Saint Denis (1141–1144).

    Notre Dame de la Belle Verrière window, Chartres Cathedral. (1180–1225).

    Detail of the windows at Sainte-Chapelle (1250).

    The Maesta by Duccio (1308) showed the Virgin Mary in a robe painted with ultramarine. Blue became the colour of holiness, virtue and humility.

    In the 12th century blue became part of the royal coat of arms of France.

    The Wilton Diptych, made for King Richard II of England, made lavish use of ultramarine. (About 1400)

    The Coronation of King Louis VIII of France in 1223 showed that blue had become the royal colour. (painted in 1450).

In the European Renaissance

In the Renaissance, a revolution occurred in painting; artists began to paint the world as it was actually seen, with perspective, depth, shadows, and light from a single source. Artists had to adapt their use of blue to the new rules. In medieval paintings, blue was used to attract the attention of the viewer to the Virgin Mary, and identify her. In Renaissance paintings, artists tried to create harmonies between blue and red, lightening the blue with lead white paint and adding shadows and highlights. Raphael was a master of this technique, carefully balancing the reds and the blues so no one colour dominated the picture.[30]

Ultramarine was the most prestigious blue of the Renaissance, and patrons sometimes specified that it be used in paintings they commissioned. The contract for the Madone des Harpies by Andrea del Sarto (1514) required that the robe of the Virgin Mary be coloured with ultramarine costing "at least five good florins an ounce."[31] Good ultramarine was more expensive than gold; in 1508 the German painter Albrecht Dürer reported in a letter that he had paid twelve ducats, the equivalent of forty-one grams of gold, for just thirty grams of ultramarine.[32]

Often painters or clients saved money by using less expensive blues, such as azurite, smalt, or pigments made with indigo, but this sometimes caused problems. Pigments made from azurite were less expensive, but tended to turn dark and green with time. An example is the robe of the Virgin Mary in The Madonna Enthroned with Saints by Raphael in the Metropolitan Museum in New York. The Virgin Mary's azurite blue robe has degraded into a greenish-black.[33]

The introduction of oil painting changed the way colours looked and how they were used. Ultramarine pigment, for instance, was much darker when used in oil painting than when used in tempera painting or in frescoes. To balance their colours, Renaissance artists like Raphael added white to lighten the ultramarine. The sombre dark blue robe of the Virgin Mary became a brilliant sky blue.[34] Titian created his rich blues by using many thin glazes of paint of different blues and violets which allowed the light to pass through, making a complex and luminous colour, like stained glass. He also used layers of finely ground or coarsely ground ultramarine, which gave subtle variations to the blue.[35]

    Giotto was one of the first Italian Renaissance painters to use ultramarine, here in the murals of the Arena Chapel in Padua (circa 1305).

    Throughout the 14th and 15th centuries, the robes of the Virgin Mary were painted with ultramarine. This is The Virgin of Humility by Fra Angelico (about 1430). Blue fills the picture.

    In The Virgin of the Meadow (1506), Raphael used white to soften the ultramarine blue of the Virgin Mary's robes to balance the red and blue, and to harmonize with the rest of the picture.

    Giovanni Bellini was the master of the rich and luminous blue, which almost seemed to glow. This Madonna is from 1480.

    Titian used an ultramarine sky and robes to give depth and brilliance to Bacchus and Ariadne (1520–1523)

    In this painting of The Madonna and Child Enthroned with Saints an early work by Raphael in the Metropolitan Museum of Art, the blue cloak of the Virgin Mary has turned a green-black. It was painted with less-expensive azurite.

    Glazed Terracotta of The Virgin Adoring the Christ Child, from the workshop of Andrea della Robbia (1483)

    The Très Riches Heures du Duc de Berry was the most important illuminated manuscript of the 15th century. The blue was the extravagantly expensive ultramarine, whose fine grains gave it its brilliant colour. It shows the Duc de Berry himself seated at the lower right. His costume shows that blue had become a colour for the dress of the nobility, not just of peasants.

    Johannes Vermeer used natural ultramarine in his paintings. The expense was probably borne by his wealthy patron Pieter van Ruijven.[36]

Blue and white porcelain

In about the 9th century, Chinese artisans abandoned the Han blue colour they had used for centuries, and began to use cobalt blue, made with cobalt salts of alumina, to manufacture fine blue and white porcelain. The plates and vases were shaped, dried, the paint applied with a brush, covered with a clear glaze, then fired at a high temperature. Beginning in the 14th century, this type of porcelain was exported in large quantity to Europe where it inspired a whole style of art, called Chinoiserie. European courts tried for many years to imitate Chinese blue and white porcelain, but only succeeded in the 18th century after a missionary brought the secret back from China.

Other famous white and blue patterns appeared in Delft, Meissen, Staffordshire, and Saint Petersburg, Russia.

    Chinese blue and white porcelain from about 1335, made in Jingdezhen, the porcelain centre of China. Exported to Europe, this porcelain launched the style of Chinoiserie.

    A soft-paste porcelain vase made in Rouen, France, at the end of the 17th century, imitating Chinese blue and white.

    Eighteenth century blue and white pottery from Delft, in the Netherlands.

    Russian porcelain of the cobalt net pattern, made with cobalt blue pigment. The Imperial Porcelain Factory in Saint Petersburg was founded in 1744. This pattern, first produced in 1949, was copied after a design made for Catherine the Great.

The war of the blues – indigo versus woad

While blue was an expensive and prestigious colour in European painting, it became a common colour for clothing during the Renaissance. The rise of the colour blue in fashion in the 12th and 13th centuries led to the creation of a thriving blue dye industry in several European cities, notably Amiens, Toulouse and Erfurt. They made a dye called pastel from woad, a plant common in Europe, which had been used to make blue dye by the Celts and German tribes. Blue became a colour worn by domestics and artisans, not just nobles. In 1570, when Pope Pius V listed the colours that could be used for ecclesiastical dress and for altar decoration, he excluded blue, because he considered it too common.[37]

The process of making blue with woad was particularly long and noxious: it involved soaking the leaves of the plant for three days to a week in human urine, ideally urine from men who had been drinking a great deal of alcohol, which was said to improve the colour. The fabric was then soaked for a day in the urine, then put out in the sun, where it turned blue as it dried.[37]

The pastel industry was threatened in the 15th century by the arrival from India of a new blue dye, indigo, made from a shrub widely grown in Asia. Indigo blue had the same chemical composition as woad, but it was more concentrated and produced a richer and more stable blue. In 1498, Vasco da Gama opened a trade route to import indigo from India to Europe. In India, the indigo leaves were soaked in water, fermented, pressed into cakes, dried into bricks, then carried to the ports of London, Marseille, Genoa and Bruges. Later, in the 17th century, the British, Spanish and Dutch established indigo plantations in Jamaica, South Carolina, the Virgin Islands and South America, and began to import American indigo to Europe.

The countries with large and prosperous pastel industries tried to block the use of indigo. The German government outlawed the use of indigo in 1577, describing it as a "pernicious, deceitful and corrosive substance, the Devil's dye."[38][39] In France, Henry IV, in an edict of 1609, forbade under pain of death the use of "the false and pernicious Indian drug".[40] It was forbidden in England until 1611, when British traders established their own indigo industry in India and began to import it into Europe.[41]

The efforts to block indigo were in vain; the quality of indigo blue was too high and the price too low for pastel made from woad to compete. In 1737 both the French and German governments finally allowed the use of indigo. This ruined the dye industries in Toulouse and the other cities that produced pastel, but created a thriving new indigo commerce to seaports such as Bordeaux, Nantes and Marseille.[42]

Another war of the blues took place at the end of the 19th century, between indigo and the new synthetic indigo, first discovered in 1868 by the German chemist Johann Friedrich Wilhelm Adolf von Baeyer. The German chemical firm BASF put the new dye on the market in 1897, in direct competition with the British-run indigo industry in India, which produced most of the world's indigo. In 1897 Britain sold ten thousand tons of natural indigo on the world market, while BASF sold six hundred tons of synthetic indigo. The British industry cut prices and reduced the salaries of its workers, but it was unable to compete; the synthetic indigo was more pure, made a more lasting blue, and was not dependent upon good or bad harvests. In 1911, India sold only 660 tons of natural indigo, while BASF sold 22,000 tons of synthetic indigo.

Not long after the battle between natural and synthetic indigo, chemists discovered a new synthetic blue dye, called indanthrene, which made a blue which did not fade. By the 1950s almost all fabrics, including blue jeans, were dyed with the new synthetic dye. In 1970, BASF stopped making synthetic indigo, and switched to newer synthetic blues.[41]

    Isatis tinctoria, or woad, was the main source of blue dye in Europe from ancient times until the arrival of indigo from Asia and America. It was processed into a paste called pastel.

    A Dutch tapestry from 1495 to 1505. The blue colour comes from woad.

    Indigofera tinctoria, a tropical shrub, is the main source of indigo dye. The chemical composition of indigo dye is the same as that of woad, but the colour is more intense.

    Cakes of indigo. The leaf has been soaked in water, fermented, mixed with lye or another base, then pressed into cakes and dried, ready for export.

    A woad mill in Thuringia, in Germany, in 1752. The woad industry was already on its way to extinction, unable to compete with indigo blue.

The blue uniform

In the 17th century, Frederick William, Elector of Brandenburg, was one of the first rulers to give his army blue uniforms. The reasons were economic; the German states were trying to protect their pastel dye industry against competition from imported indigo dye. When Brandenburg became the Kingdom of Prussia in 1701, the uniform colour was adopted by the Prussian army. Most German soldiers wore dark blue uniforms until the First World War, with the exception of the Bavarians, who wore light blue.[43]

Thanks in part to the availability of indigo dye, the 18th century saw the widespread use of blue military uniforms. Prior to 1748, British naval officers simply wore upper-class civilian clothing and wigs. In 1748, the British uniform for naval officers was officially established as an embroidered coat of the colour then called marine blue, now known as navy blue.[44] When the Continental Navy of the United States was created in 1775, it largely copied the British uniform and colour.

In the late 18th century, the blue uniform became a symbol of liberty and revolution. In October 1774, even before the United States declared its independence, George Mason and one hundred Virginia neighbours of George Washington organised a voluntary militia unit (the Fairfax County Independent Company of Volunteers) and elected Washington the honorary commander. For their uniforms they chose blue and buff, the colours of the Whig Party, the opposition party in England, whose policies were supported by George Washington and many other patriots in the American colonies.[45][46]

When the Continental Army was established in 1775 at the outbreak of the American Revolution, the first Continental Congress declared that the official uniform colour would be brown, but this was not popular with many militias, whose officers were already wearing blue. In 1778 the Congress asked George Washington to design a new uniform, and in 1779 Washington made the official colour of all uniforms blue and buff. Blue continued to be the colour of the field uniform of the U.S. Army until 1902, and is still the colour of the dress uniform.[47]

In France, the Gardes Françaises, the elite regiment which protected Louis XVI, wore dark blue uniforms with red trim. In 1789, the soldiers gradually changed their allegiance from the King to the people, and they played a leading role in the storming of the Bastille. After the fall of the Bastille, a new armed force, the Garde Nationale, was formed under the command of the Marquis de Lafayette, who had served with George Washington in America. Lafayette gave the Garde Nationale dark blue uniforms similar to those of the Continental Army. Blue became the colour of the Revolutionary armies, opposed to the white uniforms of the Royalists and the Austrians.[48]

Napoleon Bonaparte abandoned many of the doctrines of the French Revolution but he kept blue as the uniform colour for his army, although he had great difficulty obtaining the blue dye, since the British controlled the seas and blocked the importation of indigo to France. Napoleon was forced to dye uniforms with woad, which had an inferior blue colour.[49] The French army wore a dark blue uniform coat with red trousers until 1915, when it was found to be too visible a target on the battlefields of World War I. It was replaced with uniforms of a light blue-grey colour called horizon blue.

Blue was the colour of liberty and revolution in the 18th century, but in the 19th it increasingly became the colour of government authority, the uniform colour of policemen and other public servants. It was considered serious and authoritative, without being menacing. In 1829, when Robert Peel created the first London Metropolitan Police, he made the colour of the uniform jacket a dark, almost black blue, to make the policemen look different from soldiers, who until then had patrolled the streets. The traditional blue jacket with silver buttons of the London "bobby" was not abandoned until the mid-1990s, when it was replaced by a light blue shirt and a jumper or sweater of the colour officially known as NATO blue.[50]

The New York City Police Department, modelled after the London Metropolitan Police, was created in 1844, and in 1853, they were officially given a navy blue uniform, the colour they wear today.[51]

    Elector Frederick William of Brandenburg gave his soldiers blue uniforms (engraving from 1698). When Brandenburg became the Kingdom of Prussia in 1701, blue became the uniform colour of the Prussian Army.

    Uniform of a lieutenant in the Royal Navy (1777). Marine blue became the official colour of the Royal Navy uniform coat in 1748.

    George Washington chose blue and buff as the colours of the Continental Army uniform. They were the colours of the English Whig Party, which Washington admired.

    The Marquis de Lafayette in the uniform of the Garde Nationale during the French Revolution (1790).

    The cadets of the Ecole Spéciale Militaire de Saint-Cyr, the French military academy, still wear the blue and red uniform of the French army before 1915.

    In 1853, New York policemen and firemen were officially outfitted in navy blue uniforms.

    Metropolitan Police officers in Soho, London (2007).

    New York City police officers on Times Square (2010).

    Chicago policeman in blue on a Segway PT (2005)

The search for the perfect blue

During the 17th and 18th centuries, chemists in Europe tried to discover a way to create synthetic blue pigments, avoiding the expense of importing and grinding lapis lazuli, azurite and other minerals. The Egyptians had created a synthetic colour, Egyptian blue, in the third millennium BC, but the formula had been lost. The Chinese had also created synthetic pigments, but the formula was not known in the West.

In 1709, a German druggist and pigment maker named Diesbach accidentally discovered a new blue while experimenting with potassium and iron sulphides. The new colour was first called Berlin blue, but later became known as Prussian blue. By 1710 it was being used by the French painter Antoine Watteau, and later his successor Nicolas Lancret. It became immensely popular for the manufacture of wallpaper, and in the 19th century was widely used by French impressionist painters.[52]

Beginning in the 1820s, Prussian blue was imported into Japan through the port of Nagasaki. It was called bero-ai, or Berlin Blue, and it became popular because it did not fade like the traditional Japanese blue pigment, ai-gami, made from the dayflower. Prussian blue was used by both Hokusai, in his famous wave paintings, and Hiroshige.[53]

In 1824, the Société pour l'Encouragement d'Industrie in France offered a prize for the invention of an artificial ultramarine which could rival the natural colour made from lapis lazuli. The prize was won in 1826 by a chemist named Jean Baptiste Guimet, but he refused to reveal the formula of his colour. In 1828, another scientist, Christian Gmelin, then a professor of chemistry in Tübingen, found the process and published his formula. This was the beginning of a new industry to manufacture artificial ultramarine, which eventually almost completely replaced the natural product.[54]

In 1878, the German chemist Adolf von Baeyer discovered a synthetic substitute for indigotine, the active ingredient of indigo. This product gradually replaced natural indigo, and after the end of the First World War, it brought an end to the trade of indigo from the East and West Indies.

In 1901, a new synthetic blue dye, called Indanthrone blue, was invented, which had even greater resistance to fading during washing or in the sun. This dye gradually replaced artificial indigo, whose production ceased in about 1970. Today almost all blue clothing is dyed with indanthrone blue.[55]

    The 19th-century Japanese woodblock artist Hokusai used Prussian blue, a synthetic colour imported from Europe, in his wave paintings.

    A synthetic indigo dye factory in Germany in 1890. The manufacture of this dye ended the trade in indigo from America and India that had begun in the 15th century.

The Impressionist painters

The invention of new synthetic pigments in the 18th and 19th centuries considerably brightened and expanded the palette of painters. J.M.W. Turner experimented with the new cobalt blue, and of the twenty colours most used by the Impressionists, twelve were new and synthetic colours, including cobalt blue, ultramarine and cerulean blue.[56]

Another important influence on painting in the 19th century was the theory of complementary colours, developed by the French chemist Michel Eugène Chevreul in 1828 and published in 1839. He demonstrated that placing complementary colours, such as blue and yellow-orange or ultramarine and yellow, next to each other heightened the intensity of each colour "to the apogee of their tonality."[57] In 1879 an American physicist, Ogden Rood, published a book charting the complementary colours of each colour in the spectrum.[58] This principle of painting was used by Claude Monet in his Impression – Sunrise – Fog (1872), where he put a vivid blue next to a bright orange sun, and in Régate à Argenteuil (1872), where he painted an orange sun against blue water. The colours brighten each other. Renoir used the same contrast of cobalt blue water and an orange sun in Canotage sur la Seine (1879–1880). Both Monet and Renoir liked to use pure colours, without any blending.[56]

Monet and the impressionists were among the first to observe that shadows were full of colour. In his La Gare Saint-Lazare, the grey smoke, vapour and dark shadows are actually composed of mixtures of bright pigment, including cobalt blue, cerulean blue, synthetic ultramarine, emerald green, Guillet green, chrome yellow, vermilion and ecarlate red.[59] Blue was a favourite colour of the impressionist painters, who used it not just to depict nature but to create moods, feelings and atmospheres. Cobalt blue, a pigment of cobalt oxide-aluminium oxide, was a favourite of Auguste Renoir and Vincent van Gogh. It was similar to smalt, a pigment used for centuries to make blue glass, but it was much improved by the French chemist Louis Jacques Thénard, who introduced it in 1802. It was very stable but extremely expensive. Van Gogh wrote to his brother Theo, "Cobalt is a divine colour and there is nothing so beautiful for putting atmosphere around things ..."[60]

Van Gogh described to his brother Theo how he composed a sky: "The dark blue sky is spotted with clouds of an even darker blue than the fundamental blue of intense cobalt, and others of a lighter blue, like the bluish white of the Milky Way ... the sea was very dark ultramarine, the shore a sort of violet and of light red as I see it, and on the dunes, a few bushes of prussian blue."[61]

    Claude Monet used several recently invented colours in his Gare Saint-Lazare (1877). He used cobalt blue, invented in 1807, cerulean blue invented in 1860, and French ultramarine, first made in 1828.

    In Régate à Argenteuil (1872), Monet used two complementary colours together — blue and orange — to brighten the effect of both colours.

    Umbrellas, by Pierre-Auguste Renoir (1881 and 1885). Renoir used cobalt blue for the right side of the picture, but used the new synthetic ultramarine, introduced in the 1870s, when he added two figures to the left of the picture a few years later.

    In Vincent van Gogh's Irises, the blue irises are placed against their complementary colour, yellow-orange.

    Van Gogh's Starry Night Over the Rhone (1888). Blue used to create a mood or atmosphere. A cobalt blue sky, and cobalt or ultramarine water.

    Wheatfield Under Thunderclouds (July 1890), one of the last paintings by Vincent van Gogh. He wrote of cobalt blue, "there is nothing so beautiful for putting atmosphere around things."

The blue suit

Blue had first become the high fashion colour of the wealthy and powerful in Europe in the 13th century, when it was worn by Louis IX of France, better known as Saint Louis (1214-1270). Wearing blue implied dignity and wealth, and blue clothing was restricted to the nobility.[62] However, blue was replaced by black as the power colour in the 14th century, when European princes, and then merchants and bankers, wanted to show their seriousness, dignity and devoutness (see Black).

Blue gradually returned to court fashion in the 17th century, as part of a palette of peacock-bright colours shown off in extremely elaborate costumes. The modern blue business suit has its roots in England in the middle of the 17th century. Following the London plague of 1665 and the London fire of 1666, King Charles II of England ordered that his courtiers wear simple coats, waistcoats and breeches, and the palette of colours became blue, grey, white and buff. Widely imitated, this style of men's fashion became almost a uniform of the London merchant class and the English country gentleman.[63]

During the American Revolution, the leader of the Whig Party in England, Charles James Fox, wore a blue coat and buff waistcoat and breeches, the colours of the Whig Party and of the uniform of George Washington, whose principles he supported. The men's suit followed the basic form of the military uniforms of the time, particularly the uniforms of the cavalry.[63]

In the early 19th century, during the Regency of the future King George IV, the blue suit was revolutionized by a courtier named George "Beau" Brummell. Brummell created a suit that closely fitted the human form. The new style had a long tail coat cut to fit the body and long tight trousers to replace the knee-length breeches and stockings of the previous century. He used plain colours, such as blue and grey, to concentrate attention on the form of the body, not the clothes. Brummell observed, "If people turn to look at you in the street, you are not well dressed."[64] This fashion was adopted by the Prince Regent, then by London society and the upper classes. Originally the coat and trousers were different colours, but in the 19th century the suit of a single colour became fashionable. By the late 19th century the black suit had become the uniform of businessmen in England and America. In the 20th century, the black suit was largely replaced by the dark blue or grey suit.[63]

    King Louis IX of France (on the right, with Pope Innocent) was the first European king to wear blue. It quickly became the colour of the nobles and wealthy.

    Joseph Leeson, later 1st Earl of Milltown, in the typical dress of the English country gentleman in the 1730s.

    Charles James Fox, a leader of the Whig Party in England, wore a blue suit in Parliament in support of George Washington and the American Revolution. Portrait by Joshua Reynolds (1782).

    Beau Brummell introduced the ancestor of the modern blue suit, shaped to the body, with a coat, long trousers, waistcoat, white shirt and elaborate cravat (1805).

    Man's suit, 1826. Dark blue suits were still rare; this one is blue-green or teal.

    Man's blue suit in the 1870s, Paris. Painting by Caillebotte. In the second half of the 19th century the monochrome suit had become the fashion, but most suits were black.

    President John Kennedy popularised the blue two-button business suit, less formal than the suits of his predecessors. (1961)

    In the 21st century, the dark blue business suit is the most common style worn by world leaders, seen here at the 2011 G-20 Summit in Cannes, France.

In the 20th and 21st century

At the beginning of the 20th century, many artists recognised the emotional power of blue, and made it the central element of paintings. During his Blue Period (1901–1904) Pablo Picasso used blue and green, with hardly any warm colours, to create a melancholy mood. In Russia, the symbolist painter Pavel Kuznetsov and the Blue Rose art group (1906–1908) used blue to create a fantastic and exotic atmosphere. In Germany, Wassily Kandinsky and other Russian émigrés formed the art group called Der Blaue Reiter (The Blue Rider), and used blue to symbolise spirituality and eternity.[65] Henri Matisse used intense blues to express the emotions he wanted viewers to feel. Matisse wrote, "A certain blue penetrates your soul."[66]

In the art of the second half of the 20th century, painters of the abstract expressionist movement began to use blue and other colours in pure form, without any attempt to represent anything, to inspire ideas and emotions. Painter Mark Rothko observed that colour was "only an instrument;" his interest was "in expressing human emotions – tragedy, ecstasy, doom, and so on."[67]

In fashion, blue, particularly dark blue, was seen as a colour which was serious but not grim. In the mid-20th century, blue passed black as the most common colour of men's business suits, the costume usually worn by political and business leaders. Public opinion polls in the United States and Europe showed that blue was the favourite colour of over fifty per cent of respondents. Green was far behind with twenty per cent, while white and red received about eight per cent each.[68]

In 1873 a German immigrant in San Francisco, Levi Strauss, invented a sturdy kind of work trousers, made of denim fabric and coloured with indigo dye, called blue jeans. In 1935, they were raised to the level of high fashion by Vogue magazine. Beginning in the 1950s, they became an essential part of the uniform of young people in the United States, Europe, and around the world.

Blue was also seen as a colour which was authoritative without being threatening. Following the Second World War, blue was adopted as the colour of important international organisations, including the United Nations, the Council of Europe, UNESCO, the European Union, and NATO. United Nations peacekeepers wear blue helmets to stress their peacekeeping role. Blue is used by the NATO Military Symbols for Land Based Systems to denote friendly forces, hence the term "blue on blue" for friendly fire, and Blue Force Tracking for location of friendly units. The People's Liberation Army of China (formerly known as the "Red Army") uses the term "Blue Army" to refer to hostile forces during exercises.[69]

The 20th century saw the invention of new ways of creating blue, such as chemiluminescence, making blue light through a chemical reaction.

In the 20th century, it also became possible to own your own colour of blue. The French artist Yves Klein, with the help of a French paint dealer, created a specific blue called International Klein blue, which he patented. It was made of ultramarine combined with a resin called Rhodopa, which gave it a particularly brilliant colour. The baseball team the Los Angeles Dodgers developed its own blue, called Dodger blue, and several American universities invented new blues for their colours.

With the dawn of the World Wide Web, blue has become the standard colour for hyperlinks in graphic browsers (though in most browsers links turn purple if you visit their target), to make their presence within text obvious to readers.

    During his Blue Period, Pablo Picasso used blue as the colour of melancholy.

    The Russian avant-garde painter Pavel Kuznetsov and his group, the Blue Rose, used blue to symbolise fantasy and exoticism. This is In the Steppe- Mirage (1911).

    The Blue Rider (1903), by Wassily Kandinsky. For Kandinsky, blue was the colour of spirituality: the darker the blue, the more it awakened human desire for the eternal.[65]

    The Conversation (1908–1912) by Henri Matisse used blue to express the emotions he wanted the viewer to feel.

    Blue jeans, made of denim coloured with indigo dye, patented by Levi Strauss in 1873, became an essential part of the wardrobe of young people beginning in the 1950s.

    Blue is the colour of United Nations peacekeepers, known as Blue Helmets. These soldiers are patrolling the border between Ethiopia and Eritrea.

    Vivid blues can be created by chemical reactions, called chemiluminescence. This is luminol, a chemical used in crime scene investigations. Luminol glows blue when it contacts even a tiny trace of blood.

    Blue neon lighting, first used in commercial advertising, is now used in works of art. This is Zwei Pferde für Münster (Two horses for Münster), a neon sculpture by Stephan Huber (2002), in Münster, Germany.

In science and industry
Pigments and dyes

Blue pigments were made from minerals, especially lapis lazuli and azurite (Cu₃(CO₃)₂(OH)₂). These minerals were crushed, ground into powder, and then mixed with a quick-drying binding agent, such as egg yolk (tempera painting), or with a slow-drying oil, such as linseed oil, for oil painting. To make blue stained glass, cobalt blue (cobalt(II) aluminate: CoAl₂O₄) pigment was mixed with the glass. Other common blue pigments made from minerals are ultramarine (Na₈₋₁₀Al₆Si₆O₂₄S₂₋₄), cerulean blue (primarily cobalt(II) stannate: Co₂SnO₄), and Prussian blue (milori blue: primarily Fe₇(CN)₁₈).

Natural dyes to colour cloth and tapestries were made from plants. Woad and true indigo were used to produce indigo dye used to colour fabrics blue or indigo. Since the 18th century, natural blue dyes have largely been replaced by synthetic dyes.

    Lapis lazuli, mined in Afghanistan for more than three thousand years, was used for jewellery and ornaments, and later was crushed and powdered and used as a pigment. The more it was ground, the lighter the blue colour became.

    Azurite, common in Europe and Asia, is produced by the weathering of copper ore deposits. It was crushed and powdered and used as a pigment from ancient times.

    Natural ultramarine, made by grinding and purifying lapis lazuli, was the finest available blue pigment in the Middle Ages and the Renaissance. It was extremely expensive, and in Italian Renaissance art, it was often reserved for the robes of the Virgin Mary.

    Egyptian blue, the first artificial pigment, created in the third millennium BC in Ancient Egypt by grinding sand, copper and natron, and then heating them. It was often used in tomb paintings and funereal objects to protect the dead in their afterlife.

    Ground azurite was often used in the Renaissance as a substitute for the much more expensive lapis lazuli. It made a rich blue, but was unstable and could turn dark green over time.

    Cerulean was created with copper and cobalt oxide, and used to make a sky blue colour. Like azurite, it could fade or turn green.

    Cobalt blue. Cobalt has been used for centuries to colour glass and ceramics; it was used to make the deep blue stained glass windows of Gothic cathedrals and Chinese porcelain beginning in the T'ang Dynasty. In 1799 a French chemist, Louis Jacques Thénard, made a synthetic cobalt blue pigment which became immensely popular with painters.

    Prussian blue was one of the first synthetic colours, created in Berlin in about 1706 as a substitute for lapis lazuli. It is also the blue used in blueprints.

    Indigo dye is made from the plant Indigofera tinctoria (true indigo), common in Asia and Africa but little known in Europe until the 15th century. Its importation into Europe revolutionized the colour of clothing. It also became the colour used in blue denim and jeans. Nearly all indigo dye produced today is synthetic.

    Synthetic ultramarine pigment, invented in 1826, has the same chemical composition as natural ultramarine. It is more vivid than natural ultramarine because the particles are smaller and more uniform in size, and thus distribute the light more evenly.

    A new synthetic blue created in the 1930s is phthalocyanine, an intense colour widely used for making blue ink, dye, and pigment.

Optics
sRGB rendering of the spectrum of visible light
Colour    Frequency    Wavelength
violet    668–789 THz    380–450 nm
blue    606–668 THz    450–495 nm
green    526–606 THz    495–570 nm
yellow    508–526 THz    570–590 nm
orange    484–508 THz    590–620 nm
red    400–484 THz    620–750 nm

Human eyes perceive blue when observing light which has a wavelength between 450 and 495 nanometres. Blues with a higher frequency and thus a shorter wavelength gradually look more violet, while those with a lower frequency and a longer wavelength gradually appear more green. Pure blue, in the middle, has a wavelength of 470 nanometres.
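
The frequencies in the table above follow directly from the wavelengths via f = c / λ. A minimal Python sketch of that conversion (the constant and the function name are our own, used only for illustration):

    # Convert a wavelength in nanometres to a frequency in terahertz (f = c / wavelength).
    C = 299_792_458  # speed of light in a vacuum, in metres per second

    def wavelength_nm_to_thz(wavelength_nm: float) -> float:
        wavelength_m = wavelength_nm * 1e-9
        return C / wavelength_m / 1e12

    for nm in (450, 470, 495):  # the blue band quoted above
        print(f"{nm} nm -> {wavelength_nm_to_thz(nm):.0f} THz")
    # 450 nm -> 666 THz, 470 nm -> 638 THz, 495 nm -> 606 THz,
    # roughly matching the 606-668 THz range given in the table.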

Isaac Newton included blue as one of the seven colours in his first description of the visible spectrum. He chose seven colours because that was the number of notes in the musical scale, which he believed was related to the optical spectrum. He included indigo, the hue between blue and violet, as one of the separate colours, though today it is usually considered a hue of blue.[70]

In painting and traditional colour theory, blue is one of the three primary colours of pigments (red, yellow, blue), which can be mixed to form a wide gamut of colours. Red and blue mixed together form violet; blue and yellow together form green. Mixing all three primary colours together produces a dark grey. From the Renaissance onwards, painters used this system to create their colours. (See RYB colour system.)

The RYB model was used for colour printing by Jacob Christoph Le Blon as early as 1725. Later, printers discovered that more accurate colours could be created by using combinations of magenta, cyan, yellow and black ink, put onto separate inked plates and then overlaid one at a time onto paper. This method could produce almost all the colours in the spectrum with reasonable accuracy.
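
As a rough illustration of how the four printing inks relate to red, green and blue light, the simple textbook conversion below strips the shared grey component into black (K) and expresses the rest as cyan, magenta and yellow. This is only a hedged sketch of the idea; real printing relies on calibrated colour profiles, and the function here is our own naming.

    def rgb_to_cmyk(r: float, g: float, b: float):
        """Naive conversion from RGB (each 0-1) to CMYK (each 0-1)."""
        k = 1.0 - max(r, g, b)        # the shared grey component becomes black ink
        if k == 1.0:                  # pure black: avoid division by zero
            return 0.0, 0.0, 0.0, 1.0
        c = (1.0 - r - k) / (1.0 - k)
        m = (1.0 - g - k) / (1.0 - k)
        y = (1.0 - b - k) / (1.0 - k)
        return c, m, y, k

    print(rgb_to_cmyk(0.0, 0.0, 1.0))  # pure blue -> (1.0, 1.0, 0.0, 0.0): cyan plus magenta ink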

In the 19th century the Scottish physicist James Clerk Maxwell found a new way of explaining colours, by the wavelength of their light. He showed that white light could be created by combining red, blue and green light, and that virtually all colours could be made by different combinations of these three colours. His idea, called additive colour or the RGB colour model, is used today to create colours on televisions and computer screens. The screen is covered by tiny pixels, each with three fluorescent elements for creating red, green and blue light. If the red, blue and green elements all glow at once, the pixel looks white. As the screen is scanned from behind with electrons, each pixel creates its own designated colour, composing a complete picture on the screen.
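
The additive mixing described above can be sketched in a few lines of Python: each displayed colour is just the per-channel sum of the red, green and blue lights, clipped at full intensity. This is an illustrative sketch of the RGB model, not a description of any particular screen's electronics.

    def add_lights(*lights):
        """Additively mix light sources given as (r, g, b) tuples with channels 0-255."""
        return tuple(min(sum(light[ch] for light in lights), 255) for ch in range(3))

    RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

    print(add_lights(RED, GREEN, BLUE))  # (255, 255, 255): all three primaries at full intensity make white
    print(add_lights(RED, GREEN))        # (255, 255, 0): red and green light make yellow
    print(add_lights(GREEN, BLUE))       # (0, 255, 255): green and blue light make cyan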

    Additive colour mixing. The projection of primary colour lights on a screen shows secondary colours where two overlap; the combination red, green, and blue each in full intensity makes white.

    Blue and orange pixels on an LCD television screen. Closeup of the red, green and blue sub-pixels on left.

On the HSV colour wheel, the complement of blue is yellow; that is, a colour corresponding to an equal mixture of red and green light. On a colour wheel based on traditional colour theory (RYB) where blue was considered a primary colour, its complementary colour is considered to be orange (based on the Munsell colour wheel).[71]
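
The claim that yellow is the HSV complement of blue can be checked with the colorsys module from the Python standard library: rotating the hue of pure blue by 180 degrees lands on an equal mix of full red and full green, which is yellow. (The helper function is our own; only the colorsys calls are standard.)

    import colorsys

    def hsv_complement(r: float, g: float, b: float):
        """Return the colour opposite the given RGB colour (each 0-1) on the HSV hue wheel."""
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        return colorsys.hsv_to_rgb((h + 0.5) % 1.0, s, v)  # rotate the hue by 180 degrees

    r, g, b = hsv_complement(0.0, 0.0, 1.0)       # start from pure blue
    print(round(r, 3), round(g, 3), round(b, 3))  # 1.0 1.0 0.0 -> yellow
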
Scientific natural standards

    Emission spectrum of Cu²⁺
    Electronic spectrum of the aqua ion Cu(H₂O)₆²⁺

Why the sky and sea appear blue

Of the colours in the visible spectrum of light, blue has a very short wavelength, while red has the longest wavelength. When sunlight passes through the atmosphere, the blue wavelengths are scattered more widely by the oxygen and nitrogen molecules, and more blue comes to our eyes. This effect is called Rayleigh scattering, after Lord Rayleigh, the British physicist who discovered it. It was confirmed by Albert Einstein in 1911.[72]
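
Rayleigh scattering is strongly wavelength dependent: the scattered intensity varies roughly as the inverse fourth power of the wavelength (a standard result, added here as background). A short sketch of what that means for blue versus red sunlight, using representative wavelengths of our own choosing:

    def rayleigh_ratio(short_nm: float, long_nm: float) -> float:
        """Relative Rayleigh scattering of two wavelengths, using intensity ~ 1 / wavelength**4."""
        return (long_nm / short_nm) ** 4

    # Representative blue (450 nm) versus red (650 nm) light:
    print(f"{rayleigh_ratio(450, 650):.1f}")  # ~4.4: blue is scattered roughly four to five times more strongly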

Near sunrise and sunset, most of the light we see comes in nearly tangent to the Earth's surface, so the light's path through the atmosphere is so long that much of the blue and even green light is scattered out, leaving the sun's rays and the clouds they illuminate red. Therefore, when looking at the sunset and sunrise, the colour red is more prominent than any of the other colours.[73]

The sea is seen as blue for largely the same reason: the water absorbs the longer wavelengths of red and reflects and scatters the blue, which comes to the eye of the viewer. The colour of the sea is also affected by the colour of the sky, reflected by particles in the water; and by algae and plant life in the water, which can make it look green; or by sediment, which can make it look brown.[74]
Atmospheric perspective

The farther away an object is, the more blue it often appears to the eye. For example, mountains in the distance often appear blue. This is the effect of atmospheric perspective; the farther an object is from the viewer, the less contrast there is between the object and its background colour, which is usually blue. In a painting where different parts of the composition are blue, green and red, the blue will appear to be more distant, and the red closer to the viewer. The cooler a colour is, the more distant it seems.[75]

    Blue light is scattered more than other wavelengths by the gases in the atmosphere, giving the Earth a blue halo when seen from space.

    An example of aerial, or atmospheric perspective. Objects become more blue and lighter in colour the farther they are from the viewer, because of Rayleigh scattering.

    Under the sea, red and other light with longer wavelengths is absorbed, so white objects appear blue. The deeper you go, the darker the blue becomes. In the open sea, only about one per cent of light penetrates to a depth of 200 metres. (See underwater and euphotic depth)

Blue eyes
Blue eyes actually contain no blue pigment. The colour is caused by an effect called Rayleigh scattering, which also makes the sky appear blue.

Blue eyes do not actually contain any blue pigment. Eye colour is determined by two factors: the pigmentation of the eye's iris[76][77] and the scattering of light by the turbid medium in the stroma of the iris.[78] In humans, the pigmentation of the iris varies from light brown to black. The appearance of blue, green, and hazel eyes results from the Rayleigh scattering of light in the stroma, an optical effect similar to that which accounts for the blueness of the sky.[78][79] The irises of people with blue eyes contain less dark melanin than those of people with brown eyes; as a result, less short-wavelength blue light is absorbed, and more of it is scattered back out to the viewer. Eye colour also varies depending on the lighting conditions, especially for lighter-coloured eyes.

Blue eyes are most common in Ireland, the Baltic Sea area and Northern Europe,[80] and are also found in Eastern, Central, and Southern Europe. Blue eyes are also found in parts of Western Asia, most notably in Afghanistan, Syria, Iraq, and Iran.[81] In Estonia, 99% of people have blue eyes.[82][83] In Denmark 30 years ago, only 8% of the population had brown eyes, though through immigration, today that number is about 11%. In Germany, about 75% have blue eyes.[83]

In the United States, as of 2006, one out of every six people, or 16.6% of the total population, and 22.3% of the white population, have blue eyes, compared with about half of Americans born in 1900, and a third of Americans born in 1950. Blue eyes are becoming less common among American children. In the U.S., boys are 3-5 per cent more likely to have blue eyes than girls.[80]
Lasers

Lasers emitting in the blue region of the spectrum became widely available to the public in 2010 with the release of inexpensive, high-powered 445–447 nm laser diodes.[84] Previously, these blue wavelengths were accessible only through diode-pumped solid-state (DPSS) lasers, which are comparatively expensive and inefficient; however, DPSS lasers are still widely used by the scientific community for applications including optogenetics, Raman spectroscopy, and particle image velocimetry, due to their superior beam quality.[85] Blue gas lasers are also still commonly used for holography, DNA sequencing, optical pumping, and other scientific and medical applications.
In nature

    Lactarius indigo, or the blue milk mushroom

    Cornflower

    Myosotis, or Forget-me-not

    Blue seeds of the Ravenala tree from Madagascar

    The Morpho peleides butterfly. The blue is caused by iridescence, the diffraction of light from millions of tiny scales on the wings. The colour is intended to frighten predators.

    River kingfisher

    Linckia Blue starfish

    Blue sapphire, a gemstone of the mineral corundum. Trace amounts of iron colour it blue; if there are traces of chromium instead, it has a red tint and is called a ruby.

    Dried crystals of copper sulphate

    Blueberries

    Dendrobates azureus, the poison dart frog from Brazil. Its skin contains alkaloids which can paralyze or kill predators.

    Blue Jay

    A blue whale, the largest known animal to have ever existed, seen from above. The back is a pale blue grey.

Animals

    When an animal's coat is described as "blue", it usually refers to a shade of grey that takes on a bluish tint, a diluted variant of a pure black coat.[citation needed] This designation is used for a variety of animals, including dog coats, some rat coats, cat coats, some chicken breeds, some horse coat colours and rabbit coat colours. Some animals, such as giraffes and lizards, also have blue tongues.

In world culture

    In the English language, blue often represents the human emotion of sadness, for example, "He was feeling blue".
    In German, to be "blue" (blau sein) is to be drunk. This derives from the ancient use of urine, particularly that of men who had been drinking alcohol, in dyeing cloth blue with woad or indigo.[86] It may also relate to rain, which is often regarded as a trigger of depressive emotions.[87]
    Blue can sometimes represent happiness and optimism in popular songs,[88] usually referring to blue skies.[89]
    In German, a person who regularly looks upon the world with a blue eye is a person who is rather naive.[90]

    Blue is commonly used in the Western hemisphere to symbolise boys, in contrast to pink used for girls. In the early 1900s, blue was the colour for girls, since it had traditionally been the colour of the Virgin Mary in Western Art, while pink was for boys (as it was akin to the colour red, considered a masculine colour).[91]
    In China, the colour blue is commonly associated with torment, ghosts, and death.[92] In a traditional Chinese opera, a character with a face powdered blue is a villain.[93]
    In Turkey and Central Asia, blue is the colour of mourning.[92]
    The men of the Tuareg people in North Africa wear a blue turban called a tagelmust, which protects them from the sun and wind-blown sand of the Sahara desert. It is coloured with indigo. Instead of using dye, which uses precious water, the tagelmust is coloured by pounding it with powdered indigo. The blue colour transfers to the skin, where it is seen as a sign of nobility and affluence.[94] Early visitors called them the "Blue Men" of the Sahara.[95]
    In the culture of the Hopi people of the American southwest, blue symbolised the west, which was seen as the house of death. A dream about a person carrying a blue feather was considered a very bad omen.[92]
    In Thailand, blue is associated with Friday on the Thai solar calendar. Anyone may wear blue on Fridays and anyone born on a Friday may adopt blue as their colour.

    A man of the Tuareg people of North Africa wears a tagelmust or turban dyed with indigo. The indigo stains their skin blue; they were known by early visitors as "the blue men" of the desert.

As a national and international colour

Various shades of blue are used as the national colours for many nations.

    Azure, a light blue, is the national colour of Italy (from the livery colour of the former reigning family, the House of Savoy). National sport clubs are known as the Azzurri.
    Blue and white are the national colours of Scotland, Argentina, El Salvador, Finland, Greece, Guatemala, Honduras, Israel, Micronesia, Nicaragua and Somalia, are the ancient national colours of Portugal and are the colours of the United Nations.
    Blue, white and yellow are the national colours of Bosnia and Herzegovina, Kosovo and Uruguay.
    Blue, white and green are the national colours of Sierra Leone.
    Blue, white and black are the national colours of Estonia.[96]
    Blue and yellow are the national colours of Barbados, Kazakhstan, Palau, Sweden, and Ukraine.
    Blue, yellow and green are the national colours of Brazil, Gabon, and Rwanda.
    Blue, yellow and red are the national colours of Chad, Colombia, Ecuador, Moldova, Romania, and Venezuela.
    Blue and red are the national colours of Haiti and Liechtenstein.
    Blue, red and white are the national colours of Cambodia, Costa Rica, Chile, Croatia, Cuba, the Czech Republic, the Dominican Republic, France, Iceland, North Korea, Laos, Liberia, Luxembourg, Nepal, the Netherlands, New Zealand, Norway, Panama, Paraguay, Puerto Rico, Russia, Samoa, Serbia, Slovakia, Slovenia, Thailand, the United Kingdom, and the United States.
    Blue, called St. Patrick's blue, is a traditional colour of Ireland, and appears on the Arms of Ireland.

Politics
Main article: Political colour

    In the Byzantine Empire, the Blues and the Greens were the most prominent political factions in the capital. They took their names from the colours of the two most popular chariot racing teams at the Hippodrome of Constantinople.[97]
    The word blue was used in England in the 17th century as a disparaging reference to rigid moral codes and those who observed them, particularly in blue-stocking, a reference to Oliver Cromwell's supporters in the parliament of 1653.
    In the middle of the 18th century, blue was the colour of the Tory party, then the opposition party in England, Scotland and Ireland, which supported the British monarch and the power of the landed aristocracy, while the ruling Whigs had orange as their colour. Flags of the two colours are seen over a polling station in the series of prints by William Hogarth called Humours of an Election, made in 1754–55. Blue remains the colour of the Conservative Party of the UK today.

    By the time of the American Revolution, the Tories were in power and blue and buff had become the colours of the opposition Whigs. They were the subject of a famous toast to Whig politicians by Mrs. Crewe in 1784: "Buff and blue and all of you." They also became the colours of the American patriots, who had strong Whig sympathies, and of the uniforms of the Continental Army led by George Washington.[98]
    During the French Revolution and the revolt in the Vendée that followed, blue was the colour worn by the soldiers of the Revolutionary government, while the royalists wore white.
    The Breton blues were members of a liberal, anti-clerical political movement in Brittany in the late 19th century.
    The blueshirts were members of an extreme right paramilitary organization active in Ireland during the 1930s.
    Blue is associated with numerous centre-right liberal political parties in Europe, including the People's Party for Freedom and Democracy (Netherlands), the Reformist Movement and Open VLD (Belgium), the Democratic Party (Luxembourg), Liberal Party (Denmark) and Liberal People's Party (Sweden).
    Blue is the colour of the Conservative Party in Britain and Conservative Party of Canada.
    In the United States, television commentators use the term "blue states" for those states which traditionally vote for the Democratic Party in presidential elections, and "red states" for those which vote for the Republican Party.[99]
    In the Canadian province of Quebec, the Blues are those who support sovereignty for Quebec, as opposed to the Federalists. Blue is the colour of both the Parti québécois and the Parti libéral du Québec.
    Blue is the colour of the New Progressive Party of Puerto Rico.
    In Brazil, blue states are the ones in which the Social Democratic Party has the majority, in opposition to the Workers' Party, usually represented by red.
    A blue law is a type of law, typically found in the United States and Canada, designed to enforce religious standards, particularly the observance of Sunday as a day of worship or rest, and a restriction on Sunday shopping.
    The Blue House is the residence of the President of South Korea.[100]

    An illustration by William Hogarth from 1754 shows a polling station with the blue flag of the Tory party and the orange flag of the Whigs.

    The blue necktie of British Prime Minister David Cameron represents his Conservative Party.

    A map of the U.S. showing the blue states, which voted for the Democratic candidate in all the last four Presidential elections, and the red states, which voted for the Republican.

Religion

    Blue is associated, in Christianity generally and Catholicism in particular, with the Virgin Mary.[101][102][103]
    Blue in Hinduism: Many of the gods are depicted as having blue-coloured skin, particularly those associated with Vishnu, who is said to be the Preserver of the world and thus intimately connected to water. Krishna and Ram, Vishnu's avatars, are usually blue. Shiva, the Destroyer, is also depicted in light blue tones and is called neela kantha, or blue-throated, for having swallowed poison in an attempt to turn the tide of a battle between the gods and demons in the gods' favour. Blue is used to symbolically represent the fifth, throat chakra (Vishuddha).[104]
    Blue in Judaism: In the Torah,[105] the Israelites were commanded to put fringes, tzitzit, on the corners of their garments, and to weave within these fringes a "twisted thread of blue (tekhelet)".[106] In ancient days, this blue thread was made from a dye extracted from a Mediterranean snail called the hilazon. Maimonides claimed that this blue was the colour of "the clear noonday sky"; Rashi, the colour of the evening sky.[107] According to several rabbinic sages, blue is the colour of God's Glory.[108] Staring at this colour aids in meditation, bringing us a glimpse of the "pavement of sapphire, like the very sky for purity", which is a likeness of the Throne of God.[109] Many items in the Mishkan, the portable sanctuary in the wilderness, such as the menorah, many of the vessels, and the Ark of the Covenant, were covered with blue cloth when transported from place to place.[110]

    Blue stripes on a traditional Jewish tallit. The blue stripes are also featured in the flag of Israel.

    Vishnu, the supreme god of Hinduism, is often portrayed as being blue, or more precisely having skin the colour of rain-filled clouds.

    In Catholicism, blue became the traditional colour of the robes of the Virgin Mary in the 13th century.

    The Bhaisajyaguru, or "Medicine Master of Lapis Lazuli Light", is the Buddha of healing and medicine in Mahayana Buddhism. He traditionally holds a lapis lazuli jar of medicine.

    In the Islamic World, blue and turquoise tile traditionally decorates the facades and exteriors of mosques and other religious buildings. This mosque is in Isfahan, Iran.

Gender

Blue was first used as a gender signifier just prior to World War I (for either girls or boys), and first established as a male gender signifier in the 1940s.[111]
Music

    The blues is a popular musical form created in the United States in the 19th century by African-American musicians, based on African musical roots.[112] It usually expresses sadness and melancholy.
    A blue note is a musical note sung or played at a slightly lower pitch than the major scale for expressive purposes, giving it a slightly melancholy sound. It is frequently used in jazz and the blues.[113]

    Bluegrass is a sub-genre of American country music, born in Kentucky and the mountains of Appalachia. It has its roots in the traditional folk music of the Scots and the Irish.[114]

Associations and sayings

    Surveys in Europe and the United States regularly find that blue is the favourite colour of respondents, who associate it more than any other colour with sympathy, harmony, faithfulness, friendship and confidence. For example, a survey taken in Germany and published in 2009 found that blue was the favourite colour of 46 per cent of male respondents and 44 per cent of women.[5]
    True blue is an expression in the United States which means faithful and loyal.
    In Britain, a bride in a wedding is encouraged to wear "Something old, something new, something borrowed, something blue," as a sign of loyalty and faithfulness. A blue sapphire engagement ring is also considered a symbol of fidelity.[115]
    Blue is often associated with excellence, distinction and high performance. The Queen of the United Kingdom and the Chancellor of Germany often wear a blue sash at formal occasions. In the United States, the blue ribbon is usually the highest award in expositions and county fairs. The Blue Riband was a trophy and flag given to the fastest transatlantic steamships in the 19th and 20th century. A blue-ribbon panel is a group of top-level experts selected to examine a subject.
    A blue chip stock is a stock in a company with a reputation for quality and reliability in good times and bad. The term was invented in the New York Stock Exchange in 1923 or 1924, and comes from poker, where the highest value chips are blue.[116]
    Someone with blue blood is a member of the nobility. The term comes from the Spanish sangre azul, and is said to refer to the pale skin and prominent blue veins of Spanish nobles.[117]
    Blue is also associated with labour and the working class. It is the common colour of overalls, blue jeans and other working clothes. In the United States, "blue collar" workers are those who, in either skilled or unskilled jobs, work with their hands and do not wear business suits (as "white collar" workers do).
    Blue is traditionally associated with the sea and the sky, with infinity and distance. The uniforms of sailors are usually dark blue, those of air forces lighter blue. The expression "the wild blue yonder" refers to the sky.
    Blue is associated with cold: cold water taps are traditionally marked with blue.
    Bluestocking was an unflattering expression in the 18th century for upper-class women who cared about culture and intellectual life and disregarded fashion. It originally referred to men and women who wore plain blue wool stockings instead of the black silk stockings worn in society.[117]
    Blue is often associated with melancholy, as in having the "blues".

    Madame Pompadour, the mistress of King Louis XV of France, wore blue myosotis, or forget-me-not flowers in her hair and on her gowns as a symbol of faithfulness to the King.

Sports

Many sporting teams make blue their official colour, or use it as detail on kit of a different colour. In addition, the colour is present on the logos of many sports associations.
The blues of antiquity

    In the late Roman Empire, during the time of Caligula, Nero and the emperors who followed, the Blues were a popular chariot racing team which competed in the Circus Maximus in Rome against the Greens, the Reds and Whites.[97]

    In the Byzantine Empire, The Blues and Greens were the two most popular chariot racing teams which competed in the Hippodrome of Constantinople. Each was connected with a powerful political faction, and disputes between the Green and Blue supporters often became violent. After one competition in 532 AD, during the reign of the Emperor Justinian, riots between the two factions broke out, during which the cathedral and much of the centre of Constantinople were burned, and more than thirty thousand people were killed.[118] (See Nika riots)

Association football

In international association football, blue is a common colour on kits, as a majority of nations wear the colours of their national flag. A notable exception is four-time FIFA World Cup winners Italy, who wear a blue kit based on the Azzurro Savoia (Savoy blue) of the royal House of Savoy, which unified the Italian states.[119] The team themselves are known as Gli Azzurri (the Blues). Another World Cup winning nation with a blue shirt is France, who are known as Les Bleus (the Blues). Two neighbouring countries with two World Cup victories each, Argentina and Uruguay, wear a light blue shirt, the former with white stripes. Uruguay are known as La Celeste, Spanish for 'the sky blue one', while Argentina are known as Los Albicelestes, Spanish for 'the sky blue and whites'.[120]

Football clubs which have won the European Cup or Champions League and wear blue include FC Barcelona of Spain (red and blue stripes), FC Internazionale Milano of Italy (blue and black stripes) and FC Porto of Portugal (blue and white stripes). Another European Cup-winning club, Aston Villa of England, wear light blue detailing on a mostly claret shirt, often as the colour of the sleeves.[121] Clubs which have won the Copa Libertadores, a tournament for South American clubs, and wear blue include six-time winners Boca Juniors of Buenos Aires, Argentina. They wear a blue shirt with a yellow band across.

Blue features on the logo of football's governing body FIFA, as well as featuring highly in the design of their website.[122] The European governing body of football, UEFA, uses two tones of blue to create a map of Europe in the centre of their logo. The Asian Football Confederation, Oceania Football Confederation and CONCACAF (the governing body of football in North and Central America and the Caribbean) use blue text on their logos.
North American sporting leagues

In Major League Baseball, the premier baseball league in the United States of America and Canada, blue is one of the three colours, along with white and red, on the league's official logo. The team from Toronto, Ontario, is the Blue Jays. The Los Angeles Dodgers use blue prominently on their uniforms, and the phrase "Dodger Blue" is said to describe Dodger fans' "blood". The Texas Rangers also use blue prominently on their uniforms and logo.

The National Basketball Association, the premier basketball league in the United States and Canada, also has blue as one of the colours on its logo, along with red and white, as does its female equivalent, the WNBA. The Sacramento Monarchs of the WNBA wear blue. Former NBA player Theodore Edwards was nicknamed "Blue". The only NBA teams to wear blue as first choice are the Charlotte Hornets and the Indiana Pacers; however, blue is a common away colour for many other franchises.

The National Football League, the premier American football league in the United States, also uses blue as one of three colours, along with white and red, on their official logo. The Seattle Seahawks, New York Giants, Buffalo Bills, Indianapolis Colts, New England Patriots, Tennessee Titans, Denver Broncos, Houston Texans, San Diego Chargers, Dallas Cowboys, Chicago Bears and Detroit Lions feature blue prominently on their uniforms.

The National Hockey League, the premier Ice hockey league in Canada and the United States, uses blue on its official logo. Blue is the main colour of many teams in the league: the Buffalo Sabres, Columbus Blue Jackets, Edmonton Oilers, New York Islanders, New York Rangers, St. Louis Blues, Toronto Maple Leafs, Tampa Bay Lightning, Vancouver Canucks and the Winnipeg Jets.

    The Italian national football team wear blue in honour of the royal House of Savoy which unified the country.

    The New Orleans Hornets, a National Basketball Association franchise from New Orleans, Louisiana, United States, wear blue as an away colour.

See also

    Blue Flag (disambiguation)
    Blue movie (disambiguation)
    Blue Screen of Death
    Blue (university sport)
    Distinguishing "blue" from "green" in language
    Engineer's blue
    List of colours
    Non-photo blue


 
Ender
| Mythic Inconceivable!
 
more |
XBL:
PSN:
Steam:
ID: EnderWolf1013
IP: Logged

10,296 posts
 
Faggot (slang)
From Wikipedia, the free encyclopedia
For other uses, see Faggot and Fag.

In response to fag graffiti spray-painted on her car, a Volkswagen Beetle ("Bug") owner christened it "The Fagbug" and embarked on a trans-American road trip to raise awareness of homophobia and LGBT rights, which was documented in a film of the same name.[1][2]
Faggot, often shortened to fag, is a pejorative term used chiefly in North America primarily to refer to a gay man.[3][4][5] Alongside its use to refer to gay men in particular, it may also be used as a pejorative term for a "repellent male" or a homosexual person of either gender.[5][6][7] Its use has spread from the United States to varying extents elsewhere in the English-speaking world through mass culture, including film, music, and the Internet.

Contents  [hide]
1 Etymology
2 Use in the United Kingdom
3 Early printed use
4 Pascoe's research on masculinity and high school
5 Use in popular culture
5.1 Theater
5.2 Books and magazines
5.3 Music
5.4 Television and news media
6 Bibliography
7 See also
8 References
9 External links
Etymology
The American slang term is first recorded in 1914, the shortened form fag shortly after, in 1921.[8] Its immediate origin is unclear, but it is based on the word for "bundle of sticks", ultimately derived, via Old French, Italian and Vulgar Latin, from Latin fascis.[8][9]

The word faggot has been used in English since the late 16th century as an abusive term for women, particularly old women,[9] and reference to homosexuality may derive from this,[8][10] as female terms are often used with reference to homosexual or effeminate men (cf. nancy, sissy, queen). The application of the term to old women is possibly a shortening of the term "faggot-gatherer", applied in the 19th century to people, especially older widows, who made a meagre living by gathering and selling firewood.[10] It may also derive from the sense of "something awkward to be carried" (compare the use of the word baggage as a pejorative term for old people in general).[8]

An alternative possibility is that the word is connected with the practice of fagging in British private schools, in which younger boys performed (potentially sexual) duties for older boys, although the word faggot was never used in this context, only fag. There is a reference to the word faggot being used in 17th century Britain to refer to a "man hired into military service simply to fill out the ranks at muster", but there is no known connection with the word's modern pejorative usage.[8]

The Yiddish word faygele, lit. "little bird", has been claimed by some to be related to the American usage. The similarity between the two words makes it possible that it might at least have had a reinforcing effect.[8][10]

There used to be an urban legend, called an "oft-reprinted assertion" by Douglas Harper, that the modern slang meaning developed from the standard meaning of faggot as "bundle of sticks for burning" with regard to burning at the stake. This is unsubstantiated; the emergence of the slang term in 20th-century American English is unrelated to historical death penalties for homosexuality.[8]

Use in the United Kingdom
Originally confined to the United States,[8] the use of the words fag and faggot as epithets for gay men has spread elsewhere in the English-speaking world, but the extent to which they are used in this sense has varied outside the context of imported U.S. popular culture. The words queer, homo, and poof are all still in common use in the UK, and some other countries, as pejorative terms for gay men. The words fag and faggot, moreover, still have other meanings in the British Isles and other Commonwealth societies. In particular, faggot is still used to refer to a kind of meatball, and fag is common as a slang word for "cigarette".

The terms fag and fagging have been used for well over a hundred years in the English public school system for the practice of younger pupils acting as personal servants to the most senior boys.

Use of fag and faggot as the term for an effeminate man has become understood as an Americanism in British English, primarily due to entertainment media use in films and television series imported from the United States. When Labour MP Bob Marshall-Andrews was overheard supposedly using the word in a bad-tempered informal exchange with a straight colleague in the House of Commons lobby in November 2005, it was considered to be homophobic abuse.[11][12]

Early printed use
The word faggot with regard to homosexuality was used as early as 1914, in Jackson and Hellyer's A Vocabulary of Criminal Slang, with Some Examples of Common Usages which listed the following example under the word, drag:[13]

"All the fagots (sissies) will be dressed in drag at the ball tonight."
The word was also used by a character in Claude McKay’s 1928 novel Home to Harlem, indicating that it was used during the Harlem Renaissance. Specifically, one character says that he cannot understand:

"a bulldyking woman and a faggoty man"
Pascoe's research on masculinity and high school
Through ethnographic research in a high school setting, CJ Pascoe examines how American high school boys use the term fag. Pascoe's work suggests that boys in high school use the fag epithet as a way to assert their own masculinity, by claiming that another boy is less masculine; this, in their eyes, makes him a fag, and its usage suggests that it is less about sexual orientation and more about gender. One-third of the boys in Pascoe's study claimed that they would not call a homosexual peer a fag; fag is used in this setting as a form of gender policing, in which boys ridicule others who fail at masculinity, heterosexual prowess, or strength. Because boys do not want to be labeled a fag, they hurl the insult at another person. The fag identity does not constitute a static identity attached to the boy receiving the insult. Rather, fag is a fluid identity that boys strive to avoid, often by naming another as the fag. As Pascoe asserts, "[the fag identity] is fluid enough that boys police their behaviors out of fear of having the fag identity permanently adhere and definitive enough so that boys recognize a fag behavior and strive to avoid it". Pascoe's study reports that gender policing is most common among white boys, while black boys are more concerned with "acting" appropriately black. The black youth in Pascoe's study often ridiculed one another for "acting white", and did not express gender policing to the same degree as white boys.[14]

Use in popular culture

Benjamin Phelps, Fred Phelps' grandson and creator of the first "GodHatesFags" webpage, is also a member of the Westboro Baptist Church, which regularly employs picket signs using fag as an epithet.[15]
There is a long history of using both fag and faggot in popular culture, usually to denigrate lesbian, gay, bisexual, and transgender (LGBT) people. Rob Epstein and Jeffrey Friedman's 1995 documentary The Celluloid Closet, based on Vito Russo's book of the same name, notes the use of fag and faggot throughout Hollywood film history.[16] The Think Before You Speak campaign has sought to stop fag and gay from being used as generic insults.[17]

Theater
In 1973 a Broadway musical called The Faggot was praised by critics but condemned by gay liberation proponents.[18]

Books and magazines
Larry Kramer's 1978 novel Faggots discusses the gay community including the use of the word within and towards the community.[19] In its November 2002 issue, the New Oxford Review, a Catholic magazine, caused controversy by its use and defense of the word in an editorial. During the correspondence between the editors and a gay reader, the editors clarified that they would only use the word to describe a "practicing homosexual". They defended the use of the word, saying that it was important to preserve the social stigma of gays and lesbians.[20]

Music
Arlo Guthrie uses the epithet in his 1967 signature song "Alice's Restaurant", noting it as a potential way to avoid military induction at the time.[21] The Dire Straits 1985 song "Money for Nothing" makes notable use of the epithet faggot,[22] although the lines containing it are often excised for radio play and in live performances by singer-songwriter Mark Knopfler. The song was banned from airplay by the Canadian Broadcast Standards Council in 2011, but the ban was reversed later the same year.[23] In 1989, Sebastian Bach, lead singer of the band Skid Row, created a controversy when he wore a T-shirt with the parody slogan "Aids: Kills Fags Dead".[24]

The 2001 song "American Triangle" by Elton John and Bernie Taupin uses the phrase "God hates fags where we come from". The song is about Matthew Shepard, a Wyoming man who was killed because he was gay.[25] The 2007 song "The Bible Says", which includes the line "God Hates Fags" (sometimes used as an alternate title), caused considerable controversy when it was published on various websites. Apparently an anti-gay song written and performed by an ex-gay pastor, "Donnie Davies", it was accompanied by the realistic Love God's Way website about his "ministry". Debate ensued about whether Donnie Davies and the outrageous song, which included a few double entendres, were for real, and whether the lyrics could ever be considered acceptable even in satire. Donnie Davies was revealed in 2007 to be a character played by an actor and entertainer. Some gay rights advocates acknowledge that as a spoof the song is humorous, but argue that its message is as malicious as if the opinion were seriously held.[26][27][28]

In December 2007, BBC Radio 1 caused controversy by editing the word faggot from its broadcasts of the Kirsty MacColl and The Pogues song "Fairytale of New York", deeming it potentially homophobic; the edit did not extend to other BBC stations, such as BBC Radio 2. Following widespread criticism and pressure from listeners, the decision was reversed and the original unedited version of the song was reinstated, with clarification from Andy Parfitt, the station controller, that in the context of the song the lyrics had no "negative intent".[29][30] Patty Griffin uses the word faggot in her song "Tony", about a classmate of hers from high school who committed suicide.[31]

Television and news media
In 1995, former House Majority Leader Dick Armey referred to openly gay congressman Barney Frank as "Barney Fag" in a press interview.[32] Armey apologized and said it was "a slip of the tongue". Frank did not accept Armey's explanation, saying "I turned to my own expert, my mother, who reports that in 59 years of marriage, no one ever introduced her as Elsie Fag".[33]

In July 2006 conservative pundit Ann Coulter, while being interviewed by MSNBC's Chris Matthews, said that the former U.S. Vice President Al Gore was a "total fag", and suggested that former U.S. President Bill Clinton may be a "latent homosexual".[34] Coulter caused a major controversy in the LGBT community; GLAAD and other gay rights organizations demanded to know the reason why such an offensive usage of the word was permitted by the network. In March 2007, Coulter again created controversy when she made an off-color joke: "I was going to have a few comments on the other Democratic presidential candidate John Edwards, but it turns out you have to go into rehab if you use the word 'faggot', so I'm kind of at an impasse, can't really talk about Edwards".[35][36] Her comments triggered a campaign by a gay rights group and media watchdog to persuade mainstream media outlets to ban her shows and appearances.

In October 2006, Grey's Anatomy star Isaiah Washington called his co-star T. R. Knight a "faggot" on the set during an argument with Patrick Dempsey. According to Knight, the incident led to him publicly coming out of the closet.[37] Washington made another outburst using the epithet, this time backstage at the Golden Globe Awards. In January 2007, Washington issued a public apology for using the word faggot and went into rehab to help him with what the show's creator Shonda Rhimes referred to as "his behavioral issues".[38]

In November 2009, the South Park episode "The F Word" dealt with the overuse of the word fag. The boys use the word to insult a group of bikers, saying that their loud motorcycles ruin everyone else's nice time. Officials from the dictionary, including Emmanuel Lewis, arrive in town and agree that the meaning of the word should no longer insult homosexuals but instead describe loud motorcycle riders who ruin others' nice times. The episode is a commentary on the overuse of certain terms like fag and gay.[39][40][41][42]

Bibliography
Pascoe, C. J. Dude, You're a Fag: Masculinity and Sexuality in High School, University of California Press, 2007.
Kramer, Larry. Faggots, Grove Press, 2000.
Ford, Michael Thomas. That's Mr. Faggot to You: Further Trials from My Queer Life, Alyson Books, 1999.
See also
Breeder
Fag hag
Fag stag
Freedom of speech
Hate mail
Hate speech
References
1. Berk, Brett (January 8, 2009). "The Heartwarming Story of Fagbug". Vanity Fair. Retrieved July 1, 2009.
2. Raymundo, Oscar (December 19, 2007). "Driven to Spread Awareness". Newsweek. Retrieved December 13, 2008.[dead link]
3. "Faggot". Reference.com. Retrieved November 16, 2013.
4. Brewer, Paul Ryan (2008). Value War: Public Opinion and the Politics of Gay Rights. p. 60.
5. The American Heritage Dictionary of the English Language, Fourth Edition. Houghton Mifflin. 2000. ISBN 0-618-70172-9.
6. Spears, Richard A. (2007). "Fag". Dictionary of American Slang and Colloquial Expressions. Retrieved 21 December 2011.
7. Gold, David L.; Lillo Buades, Antonio; Rodríguez González, Félix (2009). Studies in Etymology and Etiology. p. 781.
8. Harper, Douglas. "Faggot". The Online Etymological Dictionary. Retrieved 2009-11-22.
9. "Faggot". The Oxford English Dictionary.
10. Morton, Mark (2005). Dirty Words: The Story of Sex Talk. London: Atlantic Books. pp. 309–323.
11. "MP's 'faggot' abuse 'disgraceful'". LGBTGreens. Retrieved 2009-11-22.
12. Helm, Toby; Jones, George (11 November 2005). "Panic and a punch-up as Blair tumbles to defeat at the hands of his own party". The Daily Telegraph (London). Archived from the original on 2007-10-14. Retrieved 2009-11-21.
13. Wilton, David; Brunetti, Ivan (2004). Word Myths: Debunking Linguistic Urban Legends. Oxford University Press US. p. 176. ISBN 0-19-517284-1, ISBN 978-0-19-517284-3.
14. Pascoe, C. J. (2007). Dude, You're a Fag: Masculinity and Sexuality in High School. Berkeley and Los Angeles, California: University of California Press.
15. Wiersbe, Warren W. (1992). The Bible Exposition Commentary: New Testament: Volume 1. David C. Cook. ISBN 1-56476-030-8, ISBN 978-1-56476-030-2.
16. The Celluloid Closet (1995). Rob Epstein and Jeffrey Friedman.
17. "'That's So Gay': Words That Can Kill". Susan Donaldson James, ABC News, 20 April 2009.
18. Barnes, Clive (August 4, 1973). "US unisex: continuing the trend". The Times. p. 7. "The theme of The Faggot is set at the beginning which shows ... one man picking up another in a movie house."
19. Kramer, Larry (2000). Faggots. Grove Press. ISBN 978-0-8021-3691-6. Retrieved 2009-11-22.
20. "Sodom & the City of God". Cityofgod.net. Retrieved 2009-11-22.
21. Guthrie, Arlo (1967). "Alice's Restaurant Massacree" (lyrics). Alice's Restaurant. Retrieved from the official Arlo Guthrie web site November 26, 2013. "And if two people, two people do it, in harmony, they may think they're both faggots and they won't take either of them."
22. "Mark Knopfler a Bigger Gay Icon Than George Michael? Ten reasons why." Mike Sealy, Seattle Weekly, July 1, 2008.
23. "Canada Lifts Ban on Dire Straits' 'Money for Nothing'".
24. Musto, Michael (2000). "La Dolce Musto". The Village Voice.
25. "Rewriting the Motives Behind Matthew Shepard's Murder". December 8, 2004. Retrieved 2009-11-23.
26. "The Latest!". The Washington Blade. 29 January 2007. Archived from the original on September 28, 2007. Retrieved 2007-02-02.
27. Savage, Dan. "Slog". The Stranger. 28 January 2007. Retrieved 2007-02-02.
28. "One Big Conn: When Viral Marketing Misses Its Mark". Philadelphia Weekly. 31 January 2007. Archived from the original on February 10, 2007. Retrieved 2007-02-02.
29. "Radio 1 censors Pogues' Fairytale". BBC News. 18 December 2007. Retrieved 2009-11-22.
30. "Radio 1 reverses decision to censor Pogues hit". Times Online (London).[dead link]
31. "Patty Griffin on the Cayamo Cruise". significatojournal.com.
32. "The Masters of Mean". 1 March 2002.
33. Rich, Frank (February 2, 1995). "Journal; Closet Clout". The New York Times.
34. "When hate speech becomes accepted". The Advocate.
35. "John Edwards Hopes to Raise 'Coulter Cash' After Commentator's 'Faggot' Comment". FOXNews.com. 4 March 2007. Retrieved 2009-11-22.
36. "Broadcast Yourself". YouTube. Retrieved 2009-11-22.
37. Nudd, Tim (17 January 2007). "Isaiah Washington's Slur Made Me Come Out". People. Retrieved 2009-11-22.
38. "Isaiah Enters Treatment". E! News.[dead link]
39. "South Park episode guide". South Park Studios. 2 November 2009. Retrieved 2009-11-02.
40. Jones, Michael A. (November 6, 2009). "Should South Park Get Away with Using the F-Word?". GayRights.Change.org. Retrieved 2009-11-18.
41. Koski, Genevieve (November 4, 2009). "The F Word". The A.V. Club. Retrieved 2009-11-07.
42. "GLAAD protests 'South Park' f-bomb episode". James Hibberd's The Live Feed. November 5, 2009. Retrieved 2009-11-07.
External links


 
Ender
| Mythic Inconceivable!
 
more |
XBL:
PSN:
Steam:
ID: EnderWolf1013
IP: Logged

10,296 posts
 
Music Genome Project
From Wikipedia, the free encyclopedia
The Music Genome Project was first conceived by Will Glaser and Tim Westergren in late 1999. In January 2000, they joined forces with Jon Kraft to found Savage Beast Technologies to bring their idea to market.[1] The Music Genome Project is an effort to "capture the essence of music at the most fundamental level" using over 450 attributes to describe songs and a complex mathematical algorithm to organize them. The Music Genome Project is currently made up of 5 sub-genomes: Pop/Rock, Hip-Hop/Electronica, Jazz, World Music, and Classical. Under the direction of Nolan Gasser and a team of musicological experts, the initial attributes were later refined and extended.

A given song is represented by a vector containing values for approximately 450 "genes" (analogous to trait-determining genes for organisms in the field of genetics, although it has been argued that this methodology bears greater resemblance to phylogeny[2]). Each gene corresponds to a characteristic of the music, for example, gender of lead vocalist, prevalent use of groove, level of distortion on the electric guitar, type of background vocals, etc. Rock and pop songs have 150 genes, rap songs have 350, and jazz songs have approximately 400. Other genres of music, such as world and classical music, have 300–450[3] genes. The system depends on a sufficient number of genes to render useful results. Each gene is assigned a number between 0 and 5, in half-integer increments.[4] The Music Genome Project's database is built using a methodology that includes the use of precisely defined terminology, a consistent frame of reference, redundant analysis, and ongoing quality control to ensure that data integrity remains reliably high.[3]
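The precise attribute list is a trade secret (see Intellectual property below), but the structure described above can be sketched in a few lines of code. This is an illustrative sketch only: the gene names and scores are invented, and only the shape of the data (a fixed-length vector of 0-to-5 scores in half-integer steps) follows the description in this article.

```python
# Illustrative sketch only: the real Music Genome Project gene list is proprietary,
# so the gene names and values below are hypothetical. The structure mirrors the
# description above: each song is a fixed-length vector of 0-5 scores in 0.5 steps.
GENES = ["male_lead_vocal", "electric_guitar_distortion",
         "prevalence_of_groove", "acoustic_sonority", "syncopation"]

def make_song_vector(scores):
    """Validate one 0-5 half-integer score per gene and return the vector."""
    assert len(scores) == len(GENES), "one score per gene"
    for s in scores:
        assert 0 <= s <= 5 and (2 * s) == int(2 * s), "scores run 0-5 in 0.5 steps"
    return list(scores)

song_a = make_song_vector([4.0, 3.5, 2.0, 1.0, 2.5])  # a real song would use ~450 genes
```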

Given the vector of one or more songs, a list of other similar songs is constructed using what the company calls its "matching algorithm". Each song is analyzed by a musician in a process that takes 20 to 30 minutes per song.[5] Ten percent of songs are analyzed by more than one musician to ensure conformity with the in-house standards and statistical reliability.
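The article does not disclose how the matching algorithm weights or compares gene vectors, so the following is only a minimal nearest-neighbour sketch. It assumes that "similar" means a small Euclidean distance between vectors; the actual patented algorithm may weight attributes very differently.

```python
import math

# Minimal matching sketch, assuming similarity means a small Euclidean distance
# between gene vectors. The real, patented matching algorithm is not public.
def distance(v1, v2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(v1, v2)))

def most_similar(seed_vector, catalog, n=3):
    """Rank catalog songs by how close their gene vectors are to the seed song."""
    ranked = sorted(catalog.items(), key=lambda item: distance(seed_vector, item[1]))
    return [title for title, _ in ranked[:n]]

# Hypothetical five-gene vectors with 0-5 scores; real songs use roughly 450 genes.
seed = [4.0, 3.5, 2.0, 1.0, 2.5]
catalog = {
    "Song B": [3.5, 4.0, 2.5, 0.5, 3.0],
    "Song C": [1.0, 0.5, 4.5, 5.0, 1.0],
}
print(most_similar(seed, catalog, n=1))  # -> ['Song B'], the closer vector
```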

The Music Genome Project was developed in its entirety by Pandora Media and remains the core technology used to program its online radio stations in response to its users' preferences. Although the company once licensed the technology to others, today it limits its use to Pandora's own service.

Because of licensing restrictions, Pandora is available only to users whose location is reported to be in the USA, Australia or New Zealand[6] by Pandora's geolocation software.[7]

Contents  [hide]
1 Intellectual property
2 See also
3 References
4 Further reading
5 External links
Intellectual property[edit]
"Music Genome Project" is a registered trademark in the United States. The mark is owned by Pandora Media, Inc.[8]

The Music Genome Project is covered by United States Patent No. 7,003,515.[4] This patent shows William T. Glaser, Timothy B. Westergren, Jeffrey P. Stearns, and Jonathan M. Kraft as the inventors of this technology. The patent has been assigned by the holders to Pandora Media, Inc.

The full list of attributes for individual songs is not publicly released, and ostensibly constitutes a trade secret.

See also[edit]
Moodbar
MusicBrainz
Pandora Radio
WhoSampled
References[edit]
1. Westergren, Tim (March 9, 2009). VV Show #54 - Tim Westergren of Pandora. Interview with Greg Galant. Venture Voice. Retrieved 2011-06-26.
2. http://phylonetworks.blogspot.com/2013/03/the-music-genome-project-is-no-such.html
3. "About The Music Genome Project". pandora.com. Retrieved 17 August 2014.
4. Music Genome Project, US Patent No. 7,003,515.
5. Ike, Elephant (February 2006). "Tiny Mix Tapes: Tim Westergren Interview". Retrieved 30 May 2013.
6. Notification email sent to Australian mailing list subscribers.
7. Pandora FAQ #79.[dead link]
8. "Music Genome Project", US Trademark No. 2731047, United States Patent Office.
Further reading[edit]
Castelluccio, Michael (December 2006), The Music Genome Project, Strategic Finance 88 (6): 57–58, ISSN 1524-833X
Jennings, David (2007), Net, Blogs and Rock 'N' Roll: How Digital Discovery Works and What it Means for Consumers, Creators and Culture, London, UK; Boston, MA: Nicholas Brealey Pub., ISBN 978-1-85788-398-5, OCLC 145379643
John, Joyce (September 2006), Pandora and the Music Genome Project, Scientific Computing 23 (10): 14, 40–41, ISSN 1930-5753, retrieved 2008-08-03
Walker, Rob (October 14, 2009). "The Song Decoders at Pandora". New York Times. Retrieved November 23, 2012.
External links[edit]
"The Music Genome Project" — short historical statement by Tim Westergren
Patent Number 7003515 — Consumer item matching method and system
Inside the Net Interview with Tim Westergren of Pandora Media
Interview with Tim Westergren March 23, 2007
Interview with Tim Westergren about the Music Genome Project and Pandora video
The first music of genes by Jean-claude Perez 1994 SACEM GEN0694


Idi Amin | Ascended Posting Frenzy
 
more |
XBL:
PSN:
Steam:
ID: Revofev
IP: Logged

374 posts
 

Pegboy
From Wikipedia, the free encyclopedia
Pegboy
Live in 2011
Background information
Origin   Chicago, Illinois
Genres   Pop punk
Years active   1990-present
Labels   Quarterstick
Associated acts   Naked Raygun, Bhopal Stiffs
Website   Myspace
Members   John Haggerty, Joe Haggerty, Larry Damore, Mike Thompson
Past members   Steve Saylors, Steve Albini, Pierre Kezdy
Pegboy is an American punk band from Chicago, Illinois with a relatively large cult following. They were founded in 1990 by John Haggerty (ex-guitarist for Naked Raygun), along with his brother Joe Haggerty (drums, formerly of The Effigies), Larry Damore (vocals/guitar), and Steve Saylors (bass). Both Damore and Saylors had been members of Chicago-based hardcore band Bhopal Stiffs, whose 1987 demo had been produced by John Haggerty. Pegboy's 1990 debut EP, "Three-Chord Monte", was also the first release by Quarterstick Records, an offshoot of Touch and Go Records. Steve Saylors dropped out in 1992 after job commitments prevented him from touring. Steve Albini, a longtime friend of the band, filled the bass slot on the "Fore" EP. Former Naked Raygun bassist Pierre Kezdy became the permanent bass player in 1994. After the reformation of Naked Raygun, Mike Thompson took over for Kezdy on bass.[1]

Pegboy supposedly played a "farewell" show on New Year's Eve in 1999[2] but then denied that it was really a "farewell" show a few years later when they returned to live action.[3]

Pegboy toured through the summer of 2009 with Face to Face and Polar Bear Club.

Rise Against's Tim McIlrath,[4] Alkaline Trio's Matt Skiba,[5] and Shai Hulud's Matt Fox are big Pegboy fans.

Contents  [hide]
1 Current members
2 Former Members
3 Discography
3.1 Albums
3.2 Singles and EPs
4 Reception
5 References
6 External links
Current members[edit]
Larry Damore — Vocals, Guitar (1990–present)
Joe Haggerty — Drums (1990–present)
John Haggerty — Guitar (1990–present)
"Skinny" Mike Thompson — Bass (2007–present)
Former Members[edit]
Steve Saylors — Bass (1990-1992)
J. Robbins — Bass (1992, temporary replacement for the Social Distortion tour)[6]
Steve Albini — Bass (1993 — on Fore)
Pierre Kezdy — Bass (1994-2007)
Discography[edit]
Albums[edit]
1991 - Strong Reaction (LP, CD)[7]
1994 - Earwig (LP, CD)[8]
1997 - Cha Cha Damore (LP, CD)[9]
Singles and EPs[edit]
1990 - Three-Chord Monte (EP)[10]
1991 - "Field of Darkness"/"Walk on By" (EP)
1993 - Fore (EP, CD)[6][11]
1996 - Dangermare (Split with Kepone) (EP)
Reception[edit]
"With roots in such seminal Chicago bands as Naked Raygun and Effigies, Pegboy sounds as if it would have been right at home during the punk upheaval of the late `70s." (Greg Kot, Chicago Tribune, 1991)[12]
"A barrage of industrial-strength noise from the North blasted through Liberty Lunch on Saturday, when the Jesus Lizard and Pegboy combined with Kepone for a galvanizing concert that brought their autumn tour to a close. All three record for Chicago's fiercely independent Touch and Go combine, which specializes in abrasive guitars over relentless rhythms and a minimum of melody." (Don McLeese, Austin American-Statesman, 1994)[13]
"The band has a knack for writing anthemic choruses in the tradition of guitarist John Haggerty`s former band, Naked Raygun."(Review of Strong Reaction, Greg Kot, Chicago Tribune, 1991)[7]
"This workmanlike band inherits the Chicago muscle 'n' melody tradition of Naked Raygun." (Review of Earwig, Greg Kot, Chicago Tribune, 1995)[8]
References[edit]
Jump up ^ "Pegboy Profile". Chicago Tribune. 1991-10-25. Retrieved 2011-08-26.
Jump up ^ Reger, Rick (1999-12-31). "Pegboy Has Two Reasons For Singing `Auld Lang Syne'". Chicago Tribune. Retrieved 2011-08-26.
Jump up ^ Reger, Rick (2002-04-12). "Pegboy back to doing what they love". Chicago Tribune. Retrieved 2011-08-26.
Jump up ^ Corazza, Kevin. "Tim McIIrath interview". Archived from the original on 2013-02-15. Retrieved 2009-08-26.
Jump up ^ Paul, Aubin (2006-03-22). "Date set for Pegboy tribute with Matt Skiba, Vic Bondi, Nine Lives, The Invisibles". Punknews.org. Retrieved 2011-08-26.
^ Jump up to: a b Fore - Pegboy at AllMusic
^ Jump up to: a b Kot, Greg (1991-10-24). "Strong Reaction (Quarterstick)". Chicago Tribune. Retrieved 2011-08-26.
^ Jump up to: a b Kot, Greg (1995-01-05). "Pegboy Earwig (Quarterstick)". Chicago Tribune. Retrieved 2011-08-26.
Jump up ^ "PEGBOY Raygun trickles down to this". Fort Worth Star-Telegram. 1998-02-06. Retrieved 2011-08-26. "Yeah it's just another Pegboy record, says bassist Pierre Kezdy, when asked about the group's latest scorcher Cha Cha Da More"
Jump up ^ Corcoran, Michael (1991-05-24). "Chicago's Pegboy a safe bet to make it to the big time". Chicago Sun-Times. Retrieved 2011-08-26.
Jump up ^ Jenkins, Mark (1993-12-03). "Pegboy Suited Only to a Tee". The Washington Post. Retrieved 2011-08-26.
Jump up ^ Kot, Greg (1991-10-28). "Punk's passion, minus the violence, propels Pegboy". Chicago Tribune. Retrieved 2011-08-26.
Jump up ^ McLeese, Don (1994-12-20). "Jesus Lizard's northern noise blows into Liberty Lunch". Austin American-Statesman. Retrieved 2011-08-26.
External links[edit]
Touch and Go/Quarterstick Records
Official Myspace page


 
Ender
| Mythic Inconceivable!
 
more |
XBL:
PSN:
Steam:
ID: EnderWolf1013
IP: Logged

10,296 posts
 
Encyclopedia dramatica
Timothy McVeigh


Seriously, McVeigh really was a true American Patriot against gun control. However despite the impressive damage and seductive display of power that he brought to a clusterfuck of federal agents and their children, Timothy still somehow died a hapless virgin. Seriously. Truly unfair.
Timothy McVeigh (moar like Timothy McYAY!, amirite?), was an American responsible for the deadliest act of terrorism in the US prior to the 9/11 attacks. He didn't commit suicide after the American people were treated to the display of his well thought out redneck fuckery, possibly because he was too busy enjoying the response he found after doing it. He was instead captured and taken into federal custody and executed, causing the American public to ban fertilizer and cry for only a few months before going back to their usual daily shit.
Another important fact is that he also quickly recognized his talents in explosives, rather than firearms, and claimed a much larger bounty for Satan by showing future terrorists how it should really be done. Thus proving that even delusional patriotic conservative white trash can be as dangerous as a garden variety towelhead given a life of unemployment, social rejection, and ultimately, sexual frustration.

Contents
1 Relevance
1.1 Theodore Kaczynski on McVeigh
2 Early Life
3 Professional Training
4 Post Army
4.1 Poverty
5 PHYS 101 -- How do I make Bomb?
6 Showing the Middle East how it's Done
6.1 The Bombing
7 See Also
Relevance

What makes Tim unique is that he was somehow able to carry his rage into his adult years and intelligently decided to go apeshit in the real world rather than the insular worlds that most younger High Scores contestants usually live in. He currently holds the high score for "Bomberman" campaign mode and to this day, he's still a fine example of what happens when you patiently decide to let your rage build up to be released later in life, years or decades after you've graduated from school entirely. Learn from him as a brilliant example to go for much higher scores.
Though the world is eagerly waiting to move forward, Timothy McVeigh's legacy is kept alive by countless amateur manifestos written by people looking for attention. Because of this, don't expect to see his name die anytime soon.
The scandalous nature of his attack led to much controversy afterward with many people citing him in their whitepapers and details about his attack, parallels to his political beliefs, and all around tossing his name around to get people to notice their own shitty attacks so that they could ride on the still fleshy anal fistula that was and still is, to this day, the butthurt caused by Tiny Tim.
Theodore Kaczynski on McVeigh
    
"On a personal level I like McVeigh and I imagine that most people would like him. He was easily the most outgoing of all the inmates on our range of cells and had excellent social skills. He was considerate of others and knew how to deal with people effectively. He communicated somehow even with the inmates on the range of cells above ours, and, because he talked with more people, he always knew more about what was going on than anyone else on our range."
— He's actually a pretty cool guy
Aldous Huxley's IRL "John Savage" Theodore Kaczynski and Timothy McVeigh met briefly in 1999, and letters piled in to Kaczynski from radical college students and journalists to ask him what the eligible bachelor was really like. Kaczynski responded in his typical manner of talking more about himself and his wildlife euphoria than of the person who everyone really cared about, and after sifting through Kaczynski's TL;DR about why prison has ruined his own superhuman mountain man spidey sense, one can see in this letter that McVeigh was actually revealed to be a pretty easy person to get along with.

Early Life

He was allegedly bullied in school. He became a tryhard hick despite technically being a Northerner, and he spent his youth raging at the government like most 15 year olds do. Over time, he developed an unhealthy obsession with guns. He later cited that in his teenage years, he repeatedly failed to get laid, which made him more upset than even his delusions against America's government.

Professional Training

Timothy McVeigh enlisted in the US Army after doing poorly in college. He decided to wave goodbye to his family and then traveled down to Fort Benning, Georgia to finally find a political climate that suited him. He performed well in basic training and was known to be highly knowledgeable in all things firearm, though he couldn't get over his shit-poor aim and couldn't do anything more than just brandish weapons that he was terrible at using. It was already clear that he wouldn't be able to skillfully use firearms to do anything useful, so the eager Timothy McVeigh began to fascinate himself in making explosives.
Yet despite his apparent acceptance of a known Islamic tactic, Timothy McVeigh contrarily became an active member of the local White Power movement, which innocently began as a small movement to control and keep the Army's league of highly trained fighter chimps in their place, given the recent integration of them into society and updated status as subhuman instead of primate, and the rights that the filthy apes were now demanding.
Though the movement was mainly charitable and benign in conception, it was later shut down by superiors on his base for not being part of the liberal agenda that purported the belief that all races are the same, a philosophy that was unacceptable to McVeigh's rural education. Timothy McVeigh was then reprimanded for this and that undoubtedly lead to even more seething anger against the government unjustly telling him what to do.
He won a few achievements that hardly anyone would honestly give a shit about: decorating himself for killing sand-niggers overseas, such as accidentally decapitating a sand-nigger with a heavy weapon, then executing a line of camelfuckers with an M9. His achievements led him to desire further outlets in the military, but a psychological profile deemed him too unstable to join the Special Forces. Facing another failure, McVeigh claimed to have ordered a bottle of champagne and later drank to leaving the military on New Year's Eve. Also relevant, Timmy was believed to have still been a virgin.

Post Army

Poverty
After leaving the subsidized tit of US taxpayer money, he then realized that he had to find work. This was problematic to Tim because it meant he had to ACTUALLY take responsibility for his abysmal life: something he calmly avoided by putting the blame on others. A couple of Army officers remembered him from the service and decided to pull a joke on the trailer trash, so after looking him up, they sent him a bill saying that he owed them $1,058.00 in cash to the US Army, payment due by the end of the month.

    
"Go ahead, take everything I own; take my dignity. Feel good as you grow fat and rich at my expense; sucking my tax dollars and property."
—LOL
This was the final straw for the twenty-something year old Timothy and he decided to campaign across the country voicing his hatred at the government. But had he simply just quit being as much of a backwoods gun-toting asshole and actually catered to the opposite gender by learning the social skills toward women he so neglected in high school, he could have easily gotten laid, which would have finally put the fucker at peace. It was that fucking simple. He later admitted while in custody that the one thing that irritated him more than the USA's increasingly threatening government was his unsatisfied sex drive.
After quitting the NRA because the rational-minded members there actually wanted him to STFU about his pointless shit and desire to cause a civil war against America, he accused them of supporting gun control and further decided to parade around the country constantly yelling at people at gun shows and events to take his mind off of the truth, then beginning to hit up Home Depot, Lowes, other hardware stores to make cheap explosives.

PHYS 101 -- How do I make Bomb?

Rent a rental truck and fill it with fertilizer. Do it fgt.
McVeigh's favorite formula seemed to be copper/brass pipes with Ammonium Nitrate pellets from freezer packs or fertilizer, doused with motor oil to get ANFO, with the pipe surrounded by Copperhead BB's. At the very least, he was bright enough to realize that black powder is a fucking shit choice to use for home made explosives because it can actually lodge itself in the pipe threading and deflagrate an explosive train back into the pipe before you can even use it on its intended target, and that black/galvanized steel with a low explosive doesn't fragment as well as a high explosive in a less stiffened brass pipe due to the Young's moduli of brass being much smaller than that of steel, leading to sharper fragments and a more complete rupture. The addition of copperheads with the brass pipe's outer coating as shrapnel leads to some nice material homogeneity since copperheads and copper pipes are relatively similar, which prevents static buildup between the pipe, charge, and shrapnel and all-around creates a safer pipe bomb. (For those of you who want to try this at home).

Showing the Middle East how it's Done

The Bombing
It's believed that after getting more bomb training from Terry Nichols, who himself was actually a pretty cool guy with a wife and family and who also shared Timmy's learned love for inferior mud-races, the two hicks quickly grinded McVeigh's skillset in Bomb Making to it's upper limit using a Nitrogen based fertilizer in their shared EXP Farm. Within a few months McVeigh had obtained the godlike OP needed to devastate the jimmies of the American public in a single attack.


Maturely taking the death of thousands of people seriously, McVeigh drew his blueprints on the back of a napkin. Lulz.
The two purchased unhuman amounts of AN fertilizer, which, surprisingly didn't set off a red flag to the authorities because it was generally believed that an American citizen wouldn't do dastardly deeds with such a fun supply of lulzfuel; and the fact that they were both hicks just led the authorities to think they were just farming with it. They packed a crude bomb within the back of a rental truck, remembering to Duct Tape the slapper detonator properly so that it didn't fuck up at compile-time, avoiding a mistake that Reb and Vodka would later make while making their shitty propane bombs. Leaving their trailer park behind, the two men set their sights on a federally funded building, packed to the brim with hundreds of evil children who supported our increasingly threatening government. Timmy initiated the slapper and lulz ensued.
More than 300 buildings were damaged. More than 12,000 volunteers and rescue workers took part in the rescue, recovery and support operations following the bombing. McVeigh covered up the guilt of his buddy, Nichols, and tried to initially take credit for the entire thing himself. Investigators had doubts though, as a man as unstable as McVeigh couldn't have orchestrated something like this without having some assistance.

    
"To these people in Oklahoma who have lost a loved one, I'm sorry but it happens every day. You're not the first mother to lose a kid, or the first grandparent to lose a grandson or a granddaughter. It happens every day, somewhere in the world. I'm not going to go into that courtroom, curl into a fetal ball and cry just because the victims want me to do that."
— Timothy McVeigh, exhibiting Troll's Remorse
When he was finally caught, it was because they noticed a bulge in his clothing, which was revealed to be an illegally owned firearm. Ultimately it was Tim's mouth foaming obsession with guns that put him away, coupled with the fact that he left his license plate on the rental truck he used to commit the bombing, so he had to drive back plateless much to the suspicion of the local law enforcement.


Your average bomber
Some argue that his reason for using bombs was that he chose to inflict the most possible damage that he could on anything related to the government. Others argue that he was a fucking patriot who knew that if he used guns to do this, the liberals would just go after those and after all nobody could honestly give a shit about bans on fertilizer unless they were a fucking Saudi, Paki, Habib, or a farmer.
But as stated above though, the main reason why he chose to use bombs was really just his goddamn terrible aim, which can be seen in his epileptic drawn map of the building. He knew what worked for him and successfully yielded massive butthurt in the act of doing so, the incident of which would later be referenced by copycat killers looking for attention and lulz.


 
True Turquoise
| MILF Hunter
 
more |
XBL: Anora Whisper
PSN: True_Turquoise
Steam: truturquoise
ID: True Turquoise
IP: Logged

25,382 posts
fuck you
Moron may refer to:

Moron (psychology), disused term for a person with a mental age between 8 and 12
A common insult for a person considered stupid (or just a generic insult)
Places:

Moron (ancient city), mentioned by the Greek geographer Strabo
Morón, Buenos Aires, a city in Greater Buenos Aires, Argentina
Roman Catholic Diocese of Morón, Argentina
Morón Partido, a district in Buenos Aires Province
Morón, Cuba, a city in Cuba
Moron, Grand'Anse, a municipality of Haiti
Mörön, a town in Mongolia
Mörön, Khentii, a district of Khentii Province in eastern Mongolia
Morong, Bataan, a municipality in the Philippines formerly known as Moron
Morón, Venezuela, a town in northern Venezuela
Moron, later renamed Taft, California, a city
Moron (mountain), in the Jura Mountains
Morón Air Base, in Morón de la Frontera, Spain
Moron Lake, a lake in Alaska
Lac de Moron, a lake on the border between France and Switzerland
Other uses:

Morón (surname), people so named
Moron Phillip (born 1992), Grenadian footballer
"Moron" (Sum 41 song)
"Moron" (KMFDM song)
Moron (bacteriophage), an extra gene in prophage genomes that does not have a phage function in the lysogenic cycle
Moron (Book of Mormon), a name and a location in the Book of Mormon


| Heroic Posting Rampage
 
more |
XBL:
PSN:
Steam:
ID: Latsu15
IP: Logged

1,664 posts
 
Nazi Germany is victorious in World War 2 as a result of the mass Soviet defeats at Kursk and Velikiye-Luki in 1943 and the subsequent conquest of the rest of Soviet Russia. In 1944 Great Britain is invaded and, after six months of brutal fighting, surrenders to the Axis forces. In the summer of 1945 Japan is defeated by the USA; however, the atomic bombs are not used for fear of reprisals from the already nuclear-armed Reich. By then the Axis and the Allies have become war-weary, and they sign a peace deal in the winter of 1946. In the post-war years, Germany focuses on technological and modernist advances, including architecture and the elimination of poverty, hunger and disease throughout the Greater German Reich.

Early History
World War 2 was to become a conflict of truly "globalized" proportions, fought in Europe, Africa, Asia and North America, as well as on the Mediterranean, Atlantic and Pacific Oceans.

A major incident occurred when Adolf Hitler gave Heinz Guderian the go-ahead to advance on the British troops and obliterate the BEF at Dunkirk out of strategic necessity, during "Operation Heute Europa".

A victim of the 1940 raid on St Katharine Docks and the Tower of London during the London Blitz.
Preparations for the invasion, code-named "Operation Sea Lion", were made while the RAF was still in ruins; most of its aircraft had been destroyed in northern France and the Battle of Britain, and the Luftwaffe gained total air supremacy in 1940. Two other major events were the Soviet defeats at the 1942-43 Battle of Velikiye-Luki and the 1943 Battle of Kursk (the latter owing to the localised use of nerve gas). The Soviet victory at Stalingrad had seemed a likely turning point on the Eastern Front, but it was checked by the decisive German victory at Kursk in 1943.


The world soon tired and the mood in Canada, Australia and the USA was for peace in Europe.

Peace in 1945
The Axis and the Allies formally signed the Peace Treaty of Moscow, which allowed the remaining powers (Germany, America and Canada) to rule peacefully in their respective zones. But America and Germany continued to compete with each other for power and technological advances.

The Third Reich
Member states of the core German Empire (Prussia shown in blue).
The Greater German Reich (Groß-Deutschland) was the largest empire in the world. It ruled over three continents: all of Europe, North Africa, all of the Middle East (except the Arabian Peninsula), the Persian Gulf, the entire Indian sub-continent and most of Siberian Russia. Jews, homosexuals, Slavs and Marxists/Communists had largely been exterminated, except those who were kept as slaves and those who hid or continued to fight in rump rebel armies. Deep in the east, the fleeing remnants of the Red Army continued to wage a guerrilla war against the Third Reich, inflicting massive casualties on the Wehrmacht. Arabs, Iranians and Muslims in general were classed as Aryan, and many were invited to study in German universities and vice versa. The Italian Empire was dissolved and Germany took over its African empire; the Italians were treated well, too, but were always kept an eye on.

Living Standards
Beneschau/Benešov in Bohemia
The citizens of the Third Reich enjoy extremely high-living standards and live in complete luxury. There is no need for them to work as this is all taken up by thousands of Slav, Communist, homosexual slaves, and some Jewish slaves. Material needs are largely satisfied, as products are produced by the harsh slave labour such as televisions, cars, expensive big houses and also rich holiday locations are massively available, particularly luxury resorts in Spain, Greece, Turkey, Italy and Cyprus. Education is very strongly encouraged and is compulsory up to university. The world's best scientists and professors are German and earn their degrees at Oxford University or the Reich University. Essentially by 1964 poverty, famine, disease had been largely eliminated and the Reich enjoyed a very peaceful haven. German living standards are matched only by those in America. The Third Reich had indeed the richest and most luxurious lands to live in. The Fuhrer himself owned hundreds of estates and private resorts all of the empire, his most famous estates being in Malta, Sicily and Athens.

The Major Cities of the Reich
The largest cities in the Greater German Reich are as follows:

Berlin after the completion of the Volkshalle in 1955.
Germany
Berlin (the fifth largest city in the World)
Hamburg
Smolensk
Nürnberg
Königsberg
München
Wien
Danzig
Krakau
Riga
Sewastopol
Gotenburg
Kiev
Autonomous Eastern Ukrainian region
Dnipropetrowsk/ Дніпропетровськ (Ukrainian)
Autonomous West Russian region
Moskau/ Москва́ (Russian)
St Petersburg/ Ленинград (Russian)
Protectorate of Bohemia-Moravia
Beneschau/ Benešov (Czech)
Greater Berlin
Berlin was renamed "Greater Berlin" and was now the richest city in the world. It was the headquarters of the Grand German Army, the home of the Reichstag and the site of Hitler's gigantic palace, guarded at all times by 2,000 of his personal, highly skilled bodyguards. With the help of Albert Speer, Berlin was transformed into a model city, well ahead of its time. It had the best universities, restaurants, five-star hotels, military academies, opera houses and beautiful architecture. Its population was in fact diverse, ranging from Aryans to Arabs, Iranians, etc.

Greater Berlin at night


Nazi soldiers in Siberia, 1954
The Grand German Army
The Wehrmacht had the largest land, sea and air forces in the world. The land forces numbered over 2.6 million men, with another three million in reserve. From 1956 Hitler allowed non-Germans directly into the main army: from occupied France there were 800,000 active Frenchmen in the Wehrmacht, from Britain 600,000, from Russia 850,000, and hundreds from smaller countries such as Denmark and Holland, aside from the original Waffen-SS. The German SS was now the police force of the Third Reich; they carefully patrolled the streets day and night, and all citizens feared them. The German 6th Army was among the most legendary formations and became the Fuhrer's personal armed force.

Technology
Luftwaffe aircraft (Ho-18 and Me362), 1957.
The Luftwaffe was hyper-advanced for its time by 1949. It was sent straight to the Eastern Front, striking at major Soviet positions, and had the ability to kill thousands of troops and civilians.

Germanization
Hitler never launched a major genocide against Western Europe as he did in the East, so he decided it would be a good idea to assimilate the Western European countries into the Greater Reich - not by directly annexing them but by Germanizing them. In France, German became the first and official language.

Ideology
A post-war modern swastika
The Nazi ideology remained very much the same after the war. Jews, homosexuals, Communists, free masons, Slavs, Gypsies and blacks were, of course, non-Aryans, and therefore had to be eliminated. However, it is true that some aspects were slightly relaxed by Hitler. Instead of complete extermination of certain non-Aryan races, the Fuhrer adopted the ancient Roman tactic of assimilation and absorption. Though it seemingly would have taken longer, it proved quite successful, especially in the East, with the remaining Slavs, Ukrainians, Tatars and the rest of the Soviet peoples. The Soviet prisoners of war were re-educated and put into German service. The Islamic power of the Turkish Empire and its allies destroyed any aspect of war in the East, and the Hindu-Indians plus Iranians were classed as Ancient Aryans and allowed to visit German schools and universities.

Nuclear Arms
The Third Reich began testing atomic technology well before the US; however, throughout the war not a single atomic bomb was dropped.

Political relations
Political relations with the USA
After the war, relations between the two superpowers remained bitter, and both struggled against each other in arms races, technology and military build-ups. At times, however, Germany and the United States showed softness towards each other; the most popular example came in 1961, when a joint United States-German special forces team invaded Cuba and helped the anti-Communist rebels overthrow Fidel Castro's despotic Marxist regime.

Political relations with Romania
The new Fascist Iron Guard regime had firmly set the country on the course towards the Axis camp, officially joining forces with the Axis Powers on 23 November, 1940.

Political relations with South America
In South America, German influence began to spread rapidly; like wildfire, Fascist dictators were seizing power through coups or popular revolutions, and by 1953 Bolivia, Argentina and Chile had all become Fascist countries with a strong pro-German stance.

Political relations with Turkey
Turkey joined the Axis forces late in the war, just as Iran did. Like its German counterpart, it had every reason to seek revenge and conquest for its defeat in World War 1. Nationalists had attempted to overthrow the Ottoman monarch in a revolution in 1923 but failed miserably, and their leader, Mustafa Kemal, was hanged publicly. Despite the victory, the empire, including its name, was reduced to simply "Turkey", no longer seen as a great power in the world's eyes. When Turkey finally joined the war, the Sultan Abdul Majid initiated a massive build-up of the Turkish ranks, surprisingly mustering a force of over one million Turkish soldiers and reforming the navy and air force.

Imperial Japan
The Japanese Empire had been militarily defeated, but the German victory in Europe, and Germany's demand that the US wage no further war against Japan, pushed the United States to end the war. On the front lines in the Pacific the Japanese had actually halted the United States with their partial victory at Iwo Jima, thanks to the ingenious tactics of General Tadamichi Kuribayashi, and the US began a retreat, coming to terms with a partial defeat in its war with Japan. It withdrew and halted attacks against the territories still occupied by Japan, owing to the generational loss of its troops in the war. The atomic bombs were never used for fear of a strike from Nazi Germany, which already possessed nuclear arms. Emperor Hirohito, knowing this full well, did not order a surrender in 1945 but simply remained in a state of war, and reparations and rebuilding began immediately. The Chinese, meanwhile, continued to wage war on Imperial forces still fighting in China and Korea. Within a matter of months the Japanese were able to form a strong anti-invasion army made up of both men and women, civilian and military, while the US Army withdrew from the Pacific to concentrate on fighting in Europe. Japan successfully reoccupied Okinawa, Taiwan and most of Korea, igniting a very long war with China; the Imperial Army did the nearly impossible with these swift reconquests.


Rebellions - Post War Period
The post-war period saw a series of uprisings in the occupied lands, including France, Britain, Italy, Romania, and parts of Russia and Ukraine. In Britain, riots broke out in 1945; protesters were shot, which led to even more riots. In France the people of Paris rose up against the occupation forces, killing hundreds of thousands of Nazi troops. Denmark had been over-run in 1940 with little fuss by the Third Reich and peacefully annexed for 46 years; it chose not to rebel afterwards, since it was considered a fellow Aryan nation.

Generalplan Ost
The "eastern territories" were to be "assimilated". The plan was to Germanise those of the existing people, non-Slavs, who could eventually adapt and become part of the Aryan race. Hitler also encouraged extreme German colonization of the Eastern Front: in 1948, 300,000 German civilians from Bavaria were transported to Kiev, and thousands of German workers and farmers were sent after them. As a reward, German troops who wished for it were given masses of land in Russia. The oil fields of Asia made sure that the Ostland was highly developed and could stand on its own feet. Soviet defectors of the "Russian Liberation Army" were also given impoverished farming land. Russia during this period can often be seen as a mirror of Colonial British America in the 1600s and 1700s.

Africa
Virtually all of Africa became Hitler's with the conquest of Europe in 1947. However, Germany did not have the political ability to rule the entire continent, as America and China heavily disliked the idea of one power ruling an entire continent and its people. Nevertheless, Hitler still carved himself a gigantic southern African empire, the "Afrika Reich", consisting of the former... Belgian Congo, British Tanzania, Rhodesia, Namibia, Kenya, Vichy French Madagascar and Portuguese Angola. Within these colonies a secret but mass genocide of the native populations began through shooting, starvation and the creation of divisions among the native tribes. Africa was rich in minerals and resources; mass colonisation was undertaken, and with the natives turned into slaves, labour shortages were never a problem for the colonists and farmers. These laws and acts of vicious brutality led to many long native rebellions within the German African Empire, which became known as the "Colonial Wars", lasting essentially from 1947 to 1960.

The Soviet-German War
The war in the east officially ended in October 1945, when the Soviet stronghold at Omsk was finally crushed by the mighty German war machine. This meant that the Soviets would never again be able to confront the Reich directly, but they still had the ability to lead a nightmarish guerrilla campaign against German-allied forces. Stalin, most of the politburo and the military generals fled far into Siberia. The destruction of Omsk led to the death of Joseph Stalin and many high-ranking politicians and generals, and Hitler assumed that Soviet Russia and the Marxist ideology of Communism had been destroyed once and for all. However, remnants of the fledgling Red Army fled deep into the east of Asia and regrouped at the city of Novolbrisk, which for the time being would be their headquarters and the de facto capital of the remnant Soviet Union.

German soldiers searching through the debris of a destroyed rebel strong-hold, January 1946

Having regrouped and rearmed, the remaining forces waged a guerrilla campaign which in time would have devastating consequences for the Third Reich: sabotage, hit-and-run attacks, assassinations of military leaders, and assaults on German bases that killed hundreds of Axis soldiers. The German high command estimated there were only about 500 of these so-called "rebels", so it continued to send inexperienced soldiers to the front, mostly defectors from the ROA ("Russian Liberation Army"). Hitler did not think Germans needed to shed blood just for the sake of killing "worthless bandits", and so sent Russians to kill Russians. In time this proved him wrong: by 1958 almost the entire ROA had been destroyed, and the Red Army was launching terror attacks inside the Third Reich.
Germans on the temporary retreat near Kazakhstan during 1945.
The United States and the PLA secretly funded and supplied the Soviet forces with weapons, ammunition, and even tanks and planes. Soon the support grew even stronger, with PLA and United States special forces being sent into Siberia to help train the rebels and to participate in combat disguised as Russian soldiers. Hitler had seriously underestimated the Soviets' remaining strength, and by 1960 200,000 German troops had been killed. Though Hitler presented the war to the German people as something that would keep them on their toes, it began to prove very costly. German soldiers quickly attempted to modify their tactics and executed civilians after every guerrilla attack, but this proved futile and only created more hatred for the Reich.

Red Army troops somewhere in Siberia, 1949
The German army's brutal methods of killing civilians, building concentration camps and destroying towns and villages failed to dismantle the rebel Soviets. Even more worrying for the German people were the terror attacks inside the Reich; the best-known example came when Sgt Romanov and his famous rebel unit attacked the occupied Ukrainian city of Zhytomry, coordinating a successful strike that destroyed an ammo dump and an entire army barracks and killed at least 45 German soldiers and 15 German colonists. Radical members of the Red Army were using more so-called "effective means" against the Reich, such as suicide bombings against civilian targets, cities and towns, televised executions, assassinations, and other forms of terror tactics. Officially the German High Command no longer treated the Russian rebel forces as mere rebel bands, but as regular, well-trained military forces of the Red Army.
A Russian militia group in Irkutsk in 1961.
And it was true: as the population regrew, thousands joined the militias and the regulars, and fighting on the front line became intense.

The push back of the Third Reich's frontier in Russia, 1960
Kamikaze and suicide squadrons were put to great effect during the fifties. Their main aim was to cause massive amounts of harm to the enemy while inflicting minimal harm to the attackers. By the 60s suicide warfare died out owing to the large-scale rebuilding of the Red Army, which now had the ability to face the Nazi war machine directly in the open field once again. The Germans had, since 1946, used their own suicide squadrons, modeled after the Kamikaze fighters of Imperial Japan.
The Sino-German WarEdit
The Communists of China defeated the Nationalists forces in 1949 and established the "People’s Republic of China" based on the principles of Communism. Hitler thought he had gotten rid of Communism once and for all, with the destruction of the Soviet Union in 1945. However, the Chinese Red Army under leader Mao Zedong had played a major role in the Sino-Japanese war and were extremely powerful. During the civil war Germany made no attempts to intervene due to distraction in attempting to
A district of Beijing in ruins after German bombardment, 1950
crush several uprisings in Europe. China, like the Soviet Union before it, was now the only Communist state in the world and the primary ideological enemy of Nazi Germany. The first outbreaks of fighting occurred on the borders of German Russia and China in 1950, when Germany sent two armoured divisions across the Mongolian border into China, killing 101 Chinese troops within 24 hours. The PRC declared war on the German Reich. The United States, which had maintained an isolationist policy up to this point, did not intervene directly; however, fearing it would one day have to face Germany alone as the sole remaining superpower, it sent conventional arms and supplies. The Xinjiang region was then invaded, opening a second front for the German army, with 20,000 soldiers and three panzer divisions committed. Much to Hitler's surprise, they met massive resistance from a 400,000-strong PLA force; within four hours the invasion was halted and the attackers suffered horrendous casualties. The German army had far more success in Inner Mongolia, surrounding Chinese forces and destroying hundreds of fortifications they had set up, resulting in the deaths of 4,000 Chinese troops.
PLA forces storming German lines in Inner Mongolia, 1950
The Soviet rebels at this time decided to take advantage of the situation and aid their Chinese allies by harassing German supply columns en route to the Chinese front, which enraged Hitler and his armies. With Inner Mongolia occupied, the German forces continued their push towards Beijing and began an aerial bombardment similar to the Japanese assaults on Chinese cities during World War 2. By the time the bombardment stopped and troops were finally sent to capture the city, the PLA had strongly reinforced the capital; as a result 50,000 German troops were killed while fewer than 100 PLA soldiers died. Meanwhile, the Chinese forces in Xinjiang counter-attacked into occupied Inner Mongolia and destroyed the German forces there, leading to a massive retreat back into German Russia. Hitler wanted to launch a nuclear strike, but by then the United States had spoken out in favour of the PRC, and so he backed down. It was the first official defeat of Nazi Germany, and the war had lasted only two months, from April 3 to June 2, 1950. Both countries' forces withdrew to their pre-war borders.

Terrorism
The Nazi empire became the target of terrorism: hijackings, kidnappings, assassinations and murders. The primary terrorists were Jewish resistance fighters and Marxists. The first wave of terrorism came on June 6, 1951, when terrorists of the so-called "Sword of Jehovah" hijacked a commercial airliner and landed it in Bethlehem; with the help of Arab fighters and contingents of SS troops, the hostages were rescued and all the terrorists were killed. The second wave
Extremists of the "Sword of Jehovah", a group infamous for its attacks against Germany.
came in the early 60s, at the hands of extreme Marxists. In 1963 ten terrorists of "the Workers' Social Revolution of France" bombed and attacked Kaiser International Airport in Paris, killing over one hundred people and injuring another 45. In response to attacks like these, Germany introduced anti-terror laws and established dedicated programmes and prevention schemes against international terrorism.
German Indochina War
The Communists of Vietnam rose from the ashes of former Japanese rule and were bent on establishing a revolutionary government in Indochina.

The British Occupation War
The British Empire truly came to an end in 1947. King George fled to Canada along with Winston Churchill. The Second Battle of Britain was the toughest war yet for the German Reich: every citizen was called upon to shed blood in the final defence of the empire, the young, the weak, the old and even women. Germany lost 850,000 troops and Britain over one million. King Edward was installed in Buckingham Palace and Oswald Mosley was released from prison to lead the new British Fascist government. For occupation forces Hitler raised thousands of French and Belgian SS divisions and
Nazi-occupied Britain, 1947
played up the traditional animosity between the French and the British. Despite the crushing of the nation, a British resistance movement formed, comprising remnants of the British army and ordinary civilians.
The Italo-German War
The Duce had grown tired of his nation being a mere puppet of the German Reich and began to seek military rather than political means of achieving a higher status in the world. In secret he started a mass military build-up, and by 1953 there were two million Italian soldiers ready to go to war against the Third Reich. The plan was to invade the oilfields of the Middle East, swiftly conquer Vichy France, and capture Vienna within two months; by then the Germans would have suffered horrible casualties and Italy might be joined by the United States of America. On May 19th, 1953, 500,000 Italian troops launched a pre-emptive strike against Vichy France and quickly overran the puppet country. However, they were swiftly stopped at Tours and thrown back with massive losses. They regrouped at Toulouse but were surrounded by German and French troops and forced to surrender. On the Eastern front the Italians reached Vienna after four days of hard fighting, but found it heavily defended and were confronted by a large German-Austrian force. They were defeated, and soon the Italians were driven back all the way to Italy, losing over 600,000 men. Over Italian Libya, which the Italians had fought to hold for a very long time, the Germans began an aerial bombardment of Tripoli and Benghazi.

Caucasus Rebellion
Christian Armenians and Georgians, like the Muslim Chechens, had longed for freedom from the Soviet Union, but they found no freedom when the German armies "liberated" their lands. Partisan groups emerged in 1946 and began a guerrilla war against the occupation forces.

Indian Rebellion
The jewel of the British Empire, the Indian sub-continent, was subdued by Nazi Germany a year after the fall of the British Isles themselves. With few British and colonial troops left, the remnants of Great Britain in India made their last stand at Hyderabad in 1948, and India was subjected to a brutal, harsh occupation. The Germans had not originally intended this, as they viewed Indians as Aryans like themselves. However, Mohandas K. Gandhi saw no reason to give up the fight for independence simply because there had been a change of occupiers, and he continued his non-violent policy of non-cooperation. The Nazis were not as tolerant as the British had been, and in 1948 they stormed his residence, hanged him and put his body on display. This led to an immediate uproar among the Hindu community, though not among the Muslim community, which had been quite cooperative with the German occupiers. The Hindu and Sikh peoples soon began to riot, and in one incident stormed a German checkpoint, killing an officer and five soldiers; reinforcements arrived and opened fire on the rioters, killing 107 people. Hindu Indians all over the country declared open war on the German occupiers, many forming the ILA, the "Indian Liberation Army". During this time the Nazis began a campaign of religious persecution, demolishing temples and places of worship and publicly shooting Hindu priests, further fuelling Hindu rage. The German occupation force in India consisted of 300,000 troops; reinforcements would come from Isfahan in Iran but would take weeks to arrive, so the German colonial government decided to raise Muslim SS divisions, as had been done in Serbia and Bosnia, to keep the country from being overrun. The ILA marched on occupied Delhi; the German troops, despite being heavily outmanned, successfully defended the city, killing 2,000 ILA soldiers while losing only 300.

World War 3
A brief naval war between Nazi Germany and the United States in the early 1960s. Several engagements were fought at sea, but neither side gained the upper hand, and the conflict came to a halt by 1961 before it could escalate further.

Slave labour
The Third Reich had an ocean of manpower and used it as the backbone of its economy and workforce: homosexuals, Gypsies, Slavs, Communists, POWs, Africans, and others the Germans considered "sub-human". Jews were never even considered for work or slavery, no matter the cost or reason. This harsh slavery ultimately freed the Reich's citizens from labour; jobs from the harshest to the mildest were all done by slaves. From time to time there were uprisings and rebellions, but they were always put down severely and in the most sinister ways. By 1960 there were over two million slaves working under the German Reich.

Space Exploration
The VSS Fatherland, the first starship, launched in 2012.
As time went by, Hitler began to look to the stars and into space. The idea of a Nazi solar empire began to run through his mind, and it thrilled him. He established the first "German space programme" in 1946, and in 1961 the first spacecraft were sent out, successfully landing on the Moon and Mars. Hitler poured massive amounts of money into the space programme, and in 1995, with the ability to create oxygen on Mars and the Moon, the first colonies were established, serving both military and civilian purposes. "The Nazi Space Command" was established as the official governing body for the space colonies; Hitler had envisioned an intergalactic Nazi space empire and pushed this idea heavily through the command. Officially, a strong military presence was maintained in the colonies in case Germany made first contact with aliens; in reality this was a propaganda story, kept up in secret so that the colonial armies could keep the colonies disciplined.
The first models of German space colonial troops, 1983
By 1991 the designs for the first starship, the "VSS Fatherland", were complete; it was officially launched in 2012 and made a series of successful journeys to Venus, Neptune, the Moon and Mars. By the mid-70s colonization had become a complete craze. Mars's capital city, "New Berlin", had a massive population of over 300,000 colonists, while Venus had a large metropolis called "New Hamburg" with a population of around 600,000. Slaves were also transported to the colonies to work the mines and help build the fabulous space cities.
Slave Uprising in Space
The first space colony established on Mars in 1971, "New Berlin"
The conditions of slaves in the space mining facilities were appalling, and on Mars they were probably the worst of all. Slaves were made to work continuous long hours, and if they stepped out of line even once there were no warnings, only executions. These conditions led to a massive uprising in 1975, when workers revolted and attempted to escape the mining core and seize the colony of Mars. The colony had a military garrison of only 30,000 troops, a civilian population of about 500,000, and a slave population of 860,000.
New Swabia
Essentially turned into a German military base after the war, New Swabia served as a colony in later years, and Hitler ordered its expansion.

Death of Hitler
Adolf Hitler died of natural causes in 1973 in his luxury home in Greece, at the ripe old age of 84. He held the official title "Hitler der Große" (Hitler the Great), and his successor was none other than Heinrich Himmler. Ten days of national mourning were proclaimed, and the world's largest and most expensive statue of Hitler was erected in Greater Berlin, entitled "Adolf Hitler the Great, Hero of the New Germany".

Major Wars, Battles and Retreats
The London Blitz
The 1940 St Katharine Docks, and the Tower of London raid
1942 Battle of Gazala
1942 Battle of Bir Hakeim
The 1942 Battle of Tobruk
The First 1942 Battle of El Alamein
The Second 1942 Battle of El Alamein
1942 Battle of Voronezh
1942 Battle for Pskov
1942-43 Battle for Velikiye-Luki
Battle of Kursk 1943: The localised use of nerve gas turns the tide in the Soviet Union.
The 1943 Battle of Voronezh
The 1944 Black Sea War
1944 Romanian rebellion
The 1944 Belgrade Rising
1944 Iraqi Rebellion
1944-45 Murmansk Siege
1944-45 Battle of the Hürtgen Forest
1944 Battle of Hilversum
The 1944 Sochi Raid
1944 Battle Of Aachen
The 1944-45 Battle Of The Bulge
1944 Battle For Vianden Castle
1945 Kazakhstan border War
1945 Timişoara rebellion
1945 Urals War
1945 Battle of Omsk
1945 Battle of Chita
1946 Hertford Rebellion
The 1946 Batumi Raid
1946 Kiev Rebellion
1946 Poti Raid
The 1946 Kuban Incident
1946 Battle For Bahrain.
The 1946 Battle for Qatar
The 1946 Battle For Oman
1946 Trucial States Rebellion.
1946 Battle for Saudi Arabia
The 1949 Omsk Rebellion
The Battle for Moscow
Second Battle of Stalingrad
1946 London Ghetto Uprising
1947 Battle of Novosibirsk


Solonoid | Mythic Inconceivable!
 
more |
XBL: Jx493
PSN: Jx493
Steam: Jx493
ID: Solonoid
IP: Logged

13,452 posts
 
Kyle Katarn is a fictional character in the Star Wars Expanded Universe, who appears in the five video games of the Jedi Knight series, the video game Star Wars: Lethal Alliance, and in several books and other material. In the Jedi Knight series, Katarn is the protagonist of Star Wars: Dark Forces and Star Wars Jedi Knight: Dark Forces II, one of two playable characters in Star Wars Jedi Knight: Mysteries of the Sith, the protagonist of Star Wars Jedi Knight II: Jedi Outcast and a major NPC in Star Wars Jedi Knight: Jedi Academy.
Katarn was originally a member of the Galactic Empire, before becoming a mercenary for hire. He regularly worked for the Rebel Alliance and later became a member of the New Republic as well as a skilled Jedi and an instructor at the Jedi Academy, second only to Luke Skywalker.
Katarn has been well received by most critics, with GameSpot including him in a vote for the greatest video game character of all time, where he was eliminated in round two, when faced against Lara Croft.[1]
Contents  [hide]
1 Appearances
1.1 Jedi Knight series
1.2 Star Wars literature
1.3 Other appearances
2 Development and depiction
3 Reception
4 References
5 External links
Appearances[edit]
Jedi Knight series[edit]
Katarn first appeared in Star Wars: Dark Forces, where he was introduced as a former Imperial officer who became a mercenary-for-hire after learning the Empire was responsible for the death of his parents.[2] As a mercenary, he regularly worked for the Rebel Alliance, where he was secretly dispatched by Mon Mothma on missions deemed too dangerous or sensitive for actual Rebel operatives. The game begins shortly before the events of the film A New Hope, with Katarn single-handedly infiltrating an Imperial facility on the planet Danuta to retrieve the plans for the first Death Star. The plans would eventually be forwarded to Princess Leia, leading to the destruction of the Death Star.[3] One year later, Katarn is employed to investigate the "Dark Trooper" project, a secret Imperial research initiative manufacturing powerful robotic stormtroopers to attack Alliance strongholds. After several adventures (including encounters with Jabba the Hutt and Boba Fett), Katarn terminates the Dark Trooper Project and kills its creator, General Rom Mohc, aboard his flagship, the Arc Hammer.[4]
Star Wars Jedi Knight: Dark Forces II takes place one year after the events of the film, Return of the Jedi.[5] It begins with 8t88, an information droid, telling Katarn about the Dark Jedi Jerec, who killed Katarn's father, Morgan, in his efforts to find the Valley of the Jedi, a focal point for Jedi power and a Jedi burial ground. 8t88 also tells Katarn of a data disk recovered from Morgan after his death which can only be translated by a droid in Morgan's home. After 8t88 leaves Katarn to be killed, Katarn escapes, tracks down 8t88 and recovers the disk. He then heads to his home planet of Sulon and has the disk translated. The disk contains a message from Morgan, telling Katarn his must pursue the ways of the Jedi, and giving him a lightsaber. Katarn also learns that seven Dark Jedi are attempting to use the power found in the Valley to rebuild the Empire. Kyle eventually kills all seven Dark Jedi and saves the Valley.[6]
Star Wars Jedi Knight: Mysteries of the Sith, an expansion pack for Dark Forces II, takes place approximately five years later.[7] The game focuses on former Imperial assassin Mara Jade, who has come under Kyle's tutelage as she trains to be a Jedi. During this period, while investigating Sith ruins on Dromund Kaas, Kyle comes under the influence of the Dark Side of the Force, but Jade is able to turn him back to the Light.[8]
Star Wars Jedi Knight II: Jedi Outcast is set three years after Mysteries of the Sith.[9] Feeling vulnerable to another fall to the Dark Side, Kyle has chosen to forsake the Force and has returned to his former mercenary ways.[10] Whilst on a mission for Mon Mothma, Kyle's partner, Jan Ors is apparently murdered at the hands of two Dark Jedi, Desann and Tavion. Determined to avenge her death, Katarn returns to the Valley of the Jedi to regain his connection to the Force. Taking back his lightsaber from Luke Skywalker, he sets out to track down Desann. After escaping from a trap with Lando Calrissian's help, Katarn heads to Cloud City and interrogates Tavion, who tells him that Jan is not dead at all. Desann simply pretended to kill her knowing Katarn would return to the Valley, at which point Desann followed him so as to infuse his soldiers with the Force and reinstall the Imperial Remnant as rulers of the galaxy. Katarn spares Tavion's life and stows away on Desann's ship, the Doomgiver. After rescuing Jan, Katarn defeats the military scientist, Galak Fyyar, who tells him that Desann plans to use his Jedi infused soldiers to attack the Jedi Academy on Yavin IV. Katarn enters the Academy and defeats Desann. After the battle, he tells Luke Skywalker that he is going to stay a Jedi, confident of his strength and dedication to the Light Side.[11]
Star Wars Jedi Knight: Jedi Academy takes place a year after Jedi Outcast,[12] and is the first game in the series in which Katarn is not a playable character. The game begins as he is appointed master of two new students, Jaden Korr and Rosh Penin. Rosh soon begins to feel held back and comes to resent Katarn. It is soon discovered that a Sith cult named the Disciples of Ragnos are stealing Force energy from various locations across the galaxy via a scepter. Along with others, Katarn and his students embark on a number of missions in an effort to discover what the cult are hoping to do with the powers they steal. During one such mission, while investigating the ruins of the planet formerly known as Byss, Rosh is captured and converted to the Dark Side by the cult's leader, Tavion (Desann's former apprentice). Jaden and Katarn escape and conclude that Tavion is storing the stolen Dark Force energy in the scepter in order to use it to resurrect an ancient Sith master, Marka Ragnos.[13] After receiving a distress message from Rosh, who has returned to the Light Side and is now a prisoner, Katarn and Jaden go to rescue him, only to discover that the distress signal was a scheme to lure the two in. After defeating Rosh, Jaden is confronted with a choice: kill him and turn to the Dark Side, or spare him and remain on the Light Side. If the player kills Rosh, the game ends with Jaden killing Tavion, taking the sceptre and fleeing, with Katarn heading out in pursuit. If the player chooses not to kill Rosh, the game ends with Jaden killing Tavion and defeating the spirit of Ragnos.[14]
Star Wars literature[edit]
In The New Jedi Order series of novels, Katarn becomes the Jedi Academy's foremost battlemaster, a close friend of Luke Skywalker, and a respected Jedi Master. During the Yuuzhan Vong invasion, Katarn helps develop strategies to use against the invaders, and participates in the rescue of human captives from the Imperial Remnant world Ord Sedra. Near the end of the war, the living planet Zonama Sekot agrees to help the Republic; Katarn is one of several Jedi Knights who bonds to seed-partners and is provided with Sekotan starships to use in Sekot's defence.[15]
During Troy Denning's The Dark Nest trilogy (The Joiner King, The Unseen Queen and The Swarm War), Katarn is one of four Jedi Masters who attempts to destroy the Dark Nest. Katarn also speaks his mind during a Master's Council session, where he stands up to Chief of State Cal Omas. He, along with Corran Horn and other Masters, believes that Jaina Solo and Zekk could be the next leaders of the Dark Nest. In The Swarm War (the final part of the trilogy), Katarn leads a squadron of Jedi Stealth X's against the Killiks.[16]
Katarn also appears in Karen Traviss' Legacy of the Force novels Bloodlines, Sacrifice, Exile and Fury, as a Jedi Master participating in Council meetings. In Bloodlines, he helps to point out the "embarrassment" to the Jedi Order of Jacen Solo's actions in apprehending Corellians on Coruscant.[17] In Exile, he plays devil's advocate regarding Leia Organa's supposed betrayal of the Galactic Alliance, although he reasserts his loyalty to Leia by being the first to formally declare his faith in her at the meeting's conclusion.[18] Katarn plays a much larger role in Fury, leading a team of Jedi against Jacen Solo in a capture-or-kill mission. After a fierce four-way lightsaber duel, Katarn is severely wounded and the mission ends in failure.[19]
Other appearances[edit]
Katarn's adventures are also told in three hardcover graphic story albums written by William C. Dietz, which were adapted into audio dramatizations; Soldier for the Empire, Rebel Agent and Jedi Knight.[3][15]
Katarn also appears in the Star Wars Roleplaying Game and is a premiere figure of "The New Jedi Order" faction in the Wizards of the Coast Star Wars Miniatures. The Wizards of the Coast web series, The Dark Forces Saga, highlights his background, as well as those of most of the other heroes and villains found in the games.
He also appears in the video game Star Wars: Empire at War, where he can be used in the 'Skirmish' battle mode as a special 'hero' unit. The game is set between Episode III and Episode IV, and, as such, Katarn cannot use force powers.[20]
The popularity of characters from Dark Forces resulted in LucasArts licensing toys based on the game. Hasbro produced Kyle Katarn and Dark Trooper toys, which are among the few Expanded Universe items to be turned into action figures.[21]
Development and depiction[edit]
Originally, the protagonist of Dark Forces was to be Luke Skywalker. However the developers of the game realized that this would add constraints to gameplay and storyline, and instead a new character, Kyle Katarn, was created.[3] For Jedi Academy, an early decision made during development was whether or not to have Kyle Katarn as the playable character. This was due to the character already being a powerful Jedi Knight, and, as such, starting off with the force skills would affect the gameplay.[22] To resolve this issue, the developers chose to make the playable character a student in the Jedi Academy. Katarn was then made an instructor in the academy and integral to the plot to ensure that Jedi Academy built upon the existing Jedi Knight series storyline.[22]
Katarn was voiced by Nick Jameson in Star Wars: Dark Forces. He was portrayed by Jason Court in the full motion video sequences of Dark Forces II. The in-game model was modeled after Court to maintain consistency. In Mysteries of the Sith, Jedi Outcast and Jedi Academy, Katarn's appearance is exclusively a polygonal model, without any FMV scenes, in which he is designed to look like a slightly older Court. In Mysteries of the Sith, he is voiced by Rino Romano, and in the two subsequent games by Jeff Bennett. For the audio dramatizations, he is portrayed by Randal Berger.[23] In Pendant Productions' Blue Harvest, Katarn is voiced by Scott Barry.[24]
Reception[edit]
GameDaily's Robert Workman listed Katarn as one of his favourite Star Wars video game characters.[25] IGN placed him as their 22nd top Star Wars character, praising him as "a gamer's reliable blank slate," a feature which they felt made him one of the most "human" Star Wars characters. They also stated that Katarn's endearment with fans was because of his "mishmash of quirks and dispositions."[26] In 2009, IGN's Jesse Schedeen argued that the character should not appear in the then upcoming Star Wars live-action TV series, feeling that "Katarn isn't very interesting without his Jedi abilities," and that deeply exploring his past was not really warranted.[27] Schedeen also included Katarn as one of his favourite Star Wars heroes and video game sword masters.[28][29] In GameSpot's vote for the all-time greatest video game hero, Katarn was eliminated in round two when faced against Lara Croft, garnering 27.5% of the votes.[1] In round one he defeated Dig Dug, with 67.6% of the votes.[30]
On the other hand, GamesRadar was critical of Katarn, calling him the third worst character in video gaming, saying "he's bearded, he's boring, he's bland and his name is Kyle Katarn," comparing his outfit to that of a "beige-obsessed disco cowboy." They also commented that while "originally a genuinely interesting character in the Han Solo mold," they felt that the character had become "emotionless" after he gained force powers.[31]
Fuck yeah, loved Dark Forces


Solonoid | Mythic Inconceivable!
 
more |
XBL: Jx493
PSN: Jx493
Steam: Jx493
ID: Solonoid
IP: Logged

13,452 posts
 
Gold is a chemical element with symbol Au and atomic number 79. It is a bright yellow, dense, soft, malleable and ductile metal. These properties remain when it is exposed to air or water. Chemically, gold is a transition metal and a group 11 element. It is one of the least reactive chemical elements, and is solid under standard conditions. The metal therefore occurs often in free elemental (native) form, as nuggets or grains, in rocks, in veins and in alluvial deposits. It occurs in a solid solution series with the native element silver (as electrum) and also naturally alloyed with copper and palladium. Less commonly, it occurs in minerals as gold compounds, often with tellurium (gold tellurides).

Gold's atomic number of 79 makes it one of the higher atomic number elements that occur naturally in the universe, and is traditionally thought to have been produced in supernova nucleosynthesis to seed the dust from which the Solar System formed. Because the Earth was molten when it was just formed, almost all of the gold present in the Earth sank into the planetary core. Therefore most of the gold that is present today in the Earth's crust and mantle is thought to have been delivered to Earth later, by asteroid impacts during the late heavy bombardment, about 4 billion years ago.

Gold resists attacks by individual acids, but it can be dissolved by aqua regia (nitro-hydrochloric acid), so named because it dissolves gold into a soluble gold tetrachloride cation. Gold compounds also dissolve in alkaline solutions of cyanide, which have been used in mining. It dissolves in mercury, forming amalgam alloys; it is insoluble in nitric acid, which dissolves silver and base metals, a property that has long been used to confirm the presence of gold in items, giving rise to the term acid test.

This metal has been a valuable and highly sought-after precious metal for coinage, jewelry, and other arts since long before the beginning of recorded history. In the past, a gold standard was often implemented as a monetary policy within and between nations, but gold coins ceased to be minted as a circulating currency in the 1930s, and the world gold standard (see article for details) was finally abandoned for a fiat currency system after 1976. The historical value of gold was rooted in its medium rarity, easy handling and minting, easy smelting, non-corrosiveness, distinct color, and non-reactivity to other elements.

A total of 174,100 tonnes of gold have been mined in human history, according to GFMS as of 2012.[3] This is roughly equivalent to 5.6 billion troy ounces or, in terms of volume, about 9200 m3, or a cube 21 m on a side. The world consumption of new gold produced is about 50% in jewelry, 40% in investments, and 10% in industry.[4]
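Those totals can be cross-checked from gold's density of roughly 19,300 kg per cubic metre (quoted later in this article) and the standard troy ounce of about 31.1 g. A minimal Python sketch of the arithmetic, using only those assumed constants and the 174,100-tonne figure above:

# Rough cross-check of the mined-gold totals quoted above (constants assumed, not from the source).
TOTAL_MINED_KG = 174_100 * 1000      # 174,100 tonnes expressed in kilograms
GRAMS_PER_TROY_OUNCE = 31.1034768    # standard troy ounce
GOLD_DENSITY_KG_M3 = 19_300          # density of gold

troy_ounces = TOTAL_MINED_KG * 1000 / GRAMS_PER_TROY_OUNCE
volume_m3 = TOTAL_MINED_KG / GOLD_DENSITY_KG_M3
cube_side_m = volume_m3 ** (1 / 3)

print(f"{troy_ounces / 1e9:.1f} billion troy ounces")                         # about 5.6
print(f"{volume_m3:.0f} cubic metres, a cube {cube_side_m:.0f} m on a side")  # roughly 9,000 m3 and 21 m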

Gold’s high malleability, ductility, resistance to corrosion and most other chemical reactions, and conductivity of electricity have led to its continued use in corrosion resistant electrical connectors in all types of computerized devices (its chief industrial use). Gold is also used in infrared shielding, colored-glass production, and gold leafing. Certain gold salts are still used as anti-inflammatories in medicine.

Contents

    1 Etymology
    2 Characteristics
        2.1 Color
        2.2 Isotopes
    3 Modern applications
        3.1 Jewelry
        3.2 Investment
        3.3 Electronics connectors
        3.4 Non-electronic industry
        3.5 Commercial chemistry
        3.6 Medicine
        3.7 Food and drink
    4 Monetary exchange (historical)
    5 Cultural history
    6 Occurrence
        6.1 Seawater
        6.2 Specimens of crystalline native gold
    7 Production
    8 Mining
        8.1 Prospecting
    9 Bioremediation
    10 Extraction
        10.1 Refining
    11 Synthesis from other elements
    12 Consumption
    13 Pollution
    14 Chemistry
        14.1 Less common oxidation states
        14.2 Mixed valence compounds
    15 Toxicity
    16 Price
        16.1 History
    17 Symbolism
    18 State emblem
    19 See also
    20 References
    21 Further reading
    22 External links

Etymology

"Gold" is cognate with similar words in many Germanic languages, deriving via Proto-Germanic *gulþą from Proto-Indo-European *ǵʰelh₃- ("to shine, to gleam; to be yellow or green").[5][6]

The symbol Au is from the Latin: aurum, the Latin word for "gold".[7] The Proto-Indo-European ancestor of aurum was *h₂é-h₂us-o-, meaning "glow". This word is derived from the same root (Proto-Indo-European *h₂u̯es- "to dawn") as *h₂éu̯sōs, the ancestor of the Latin word Aurora, "dawn".[8] This etymological relationship is presumably behind the frequent claim in scientific publications that aurum meant "shining dawn".[9][10]
Characteristics

Gold is the most malleable of all metals; a single gram can be beaten into a sheet of 1 square meter, or an ounce into 300 square feet. Gold leaf can be beaten thin enough to become transparent. The transmitted light appears greenish blue, because gold strongly reflects yellow and red.[11] Such semi-transparent sheets also strongly reflect infrared light, making them useful as infrared (radiant heat) shields in visors of heat-resistant suits, and in sun-visors for spacesuits.[12]
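The malleability figures above imply a leaf only a few tens of nanometres thick, which is why such leaf can transmit light. A small illustrative calculation, assuming a density of 19.3 g/cm3 and reading "an ounce" as a troy ounce of about 31.1 g:

# Thickness implied by beating a given mass of gold over a given area (illustrative only).
DENSITY_G_PER_CM3 = 19.3

def leaf_thickness_nm(mass_g, area_m2):
    volume_m3 = (mass_g / DENSITY_G_PER_CM3) * 1e-6   # convert cm^3 to m^3
    return volume_m3 / area_m2 * 1e9                  # thickness in nanometres

print(leaf_thickness_nm(1.0, 1.0))            # 1 g over 1 m^2: roughly 52 nm
print(leaf_thickness_nm(31.1, 300 * 0.0929))  # 1 troy oz over 300 sq ft: roughly 58 nm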

Gold readily dissolves in mercury at room temperature to form an amalgam, and forms alloys with many other metals at higher temperatures. These alloys can be produced to modify the hardness and other metallurgical properties, to control melting point or to create exotic colors.[13] Gold is a good conductor of heat and electricity and reflects infrared radiation strongly. Chemically, it is unaffected by air, moisture and most corrosive reagents, and is therefore well suited for use in coins and jewelry and as a protective coating on other, more reactive metals. However, it is not chemically inert. Gold is almost insoluble, but can be dissolved in aqua regia or solutions of sodium or potassium cyanide, for example.

Common oxidation states of gold include +1 (gold(I) or aurous compounds) and +3 (gold(III) or auric compounds). Gold ions in solution are readily reduced and precipitated as metal by adding any other metal as the reducing agent. The added metal is oxidized and dissolves, allowing the gold to be displaced from solution and be recovered as a solid precipitate.

In addition, gold is very dense: a cubic meter has a mass of 19,300 kg. By comparison, the density of lead is 11,340 kg/m3, and that of the densest element, osmium, is 22,588 ± 15 kg/m3.[14]
Color
Different colors of Ag-Au-Cu alloys

Whereas most other pure metals are gray or silvery white, gold is yellow. This color is determined by the density of loosely bound (valence) electrons; those electrons oscillate as a collective "plasma" medium described in terms of a quasiparticle called plasmon. The frequency of these oscillations lies in the ultraviolet range for most metals, but it falls into the visible range for gold due to subtle relativistic effects that affect the orbitals around gold atoms.[15][16] Similar effects impart a golden hue to metallic caesium.

Common colored gold alloys such as rose gold can be created by the addition of various amounts of copper and silver, as indicated in the triangular diagram to the left. Alloys containing palladium or nickel are also important in commercial jewelry as these produce white gold alloys. Less commonly, addition of manganese, aluminium, iron, indium and other elements can produce more unusual colors of gold for various applications.[13]
Isotopes
Main article: Isotopes of gold

Gold has only one stable isotope, 197Au, which is also its only naturally occurring isotope. Thirty-six radioisotopes have been synthesized ranging in atomic mass from 169 to 205. The most stable of these is 195Au with a half-life of 186.1 days. The least stable is 171Au, which decays by proton emission with a half-life of 30 µs. Most of gold's radioisotopes with atomic masses below 197 decay by some combination of proton emission, α decay, and β+ decay. The exceptions are 195Au, which decays by electron capture, and 196Au, which decays most often by electron capture (93%) with a minor β− decay path (7%).[17] All of gold's radioisotopes with atomic masses above 197 decay by β− decay.[18]

At least 32 nuclear isomers have also been characterized, ranging in atomic mass from 170 to 200. Within that range, only 178Au, 180Au, 181Au, 182Au, and 188Au do not have isomers. Gold's most stable isomer is 198m2Au with a half-life of 2.27 days. Gold's least stable isomer is 177m2Au with a half-life of only 7 ns. 184m1Au has three decay paths: β+ decay, isomeric transition, and alpha decay. No other isomer or isotope of gold has three decay paths.[18]
Modern applications

The world consumption of new gold produced is about 50% in jewelry, 40% in investments, and 10% in industry.[4]
Jewelry
Main article: Jewelry
Moche gold necklace depicting feline heads. Larco Museum Collection. Lima-Peru

Because of the softness of pure (24k) gold, it is usually alloyed with base metals for use in jewelry, altering its hardness and ductility, melting point, color and other properties. Alloys with lower carat rating, typically 22k, 18k, 14k or 10k, contain higher percentages of copper or other base metals or silver or palladium in the alloy. Copper is the most commonly used base metal, yielding a redder color.[19]

Eighteen-carat gold containing 25% copper is found in antique and Russian jewelry and has a distinct, though not dominant, copper cast, creating rose gold. Fourteen-carat gold-copper alloy is nearly identical in color to certain bronze alloys, and both may be used to produce police and other badges. Blue gold can be made by alloying with iron and purple gold can be made by alloying with aluminium, although rarely done except in specialized jewelry. Blue gold is more brittle and therefore more difficult to work with when making jewelry.[19]

Fourteen- and eighteen-carat gold alloys with silver alone appear greenish-yellow and are referred to as green gold. White gold alloys can be made with palladium or nickel. White 18-carat gold containing 17.3% nickel, 5.5% zinc and 2.2% copper is silvery in appearance. Nickel is toxic, however, and its release from nickel white gold is controlled by legislation in Europe.[19]

Alternative white gold alloys are available based on palladium, silver and other white metals,[19] but the palladium alloys are more expensive than those using nickel. High-carat white gold alloys are far more resistant to corrosion than are either pure silver or sterling silver. The Japanese craft of Mokume-gane exploits the color contrasts between laminated colored gold alloys to produce decorative wood-grain effects.

By 2014 the gold jewelry industry was escalating despite a dip in gold prices. Demand in the first quarter of 2014 pushed turnover to $23.7 billion according to a World Gold Council report.
Investment
Gold prices (US$ per troy ounce), in nominal US$ and inflation adjusted US$.
Main article: Gold as an investment

Many holders of gold store it in form of bullion coins or bars as a hedge against inflation or other economic disruptions. However, economist Martin Feldstein does not believe gold serves as a hedge against inflation or currency depreciation.[20]

The ISO 4217 currency code of gold is XAU.

Modern bullion coins for investment or collector purposes do not require good mechanical wear properties; they are typically fine gold at 24k, although the American Gold Eagle and the British gold sovereign continue to be minted in 22k (0.92) metal in historical tradition, and the South African Krugerrand, first released in 1967, is also 22k (0.92).[21] The special issue Canadian Gold Maple Leaf coin contains the highest purity gold of any bullion coin, at 99.999% or 0.99999, while the popular issue Canadian Gold Maple Leaf coin has a purity of 99.99%.

Several other 99.99% pure gold coins are available. In 2006, the United States Mint began producing the American Buffalo gold bullion coin with a purity of 99.99%. The Australian Gold Kangaroos were first coined in 1986 as the Australian Gold Nugget but changed the reverse design in 1989. Other modern coins include the Austrian Vienna Philharmonic bullion coin and the Chinese Gold Panda.
Electronics connectors

Only 10% of the world consumption of new gold produced goes to industry,[4] but by far the most important industrial use for new gold is in fabrication of corrosion-free electrical connectors in computers and other electrical devices. For example, according to the World Gold Council, a typical cell phone may contain 50 mg of gold, worth about 50 cents. But since nearly one billion cell phones are produced each year, a gold value of 50 cents in each phone adds up to $500 million in gold from just this application.[22]
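The aggregate figure follows directly from the per-phone numbers quoted above; taking the article's 50 mg and 50-cent values as given, a quick sketch:

# Aggregate gold in annual phone production, using only the per-unit figures quoted above.
GOLD_PER_PHONE_MG = 50
VALUE_PER_PHONE_USD = 0.50
PHONES_PER_YEAR = 1_000_000_000

total_gold_tonnes = GOLD_PER_PHONE_MG * PHONES_PER_YEAR / 1e9   # 1 tonne = 1e9 mg
total_value_usd = VALUE_PER_PHONE_USD * PHONES_PER_YEAR

print(total_gold_tonnes)            # 50 tonnes of gold per year
print(f"${total_value_usd:,.0f}")   # $500,000,000, matching the figure above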

Though gold is attacked by free chlorine, its good conductivity and general resistance to oxidation and corrosion in other environments (including resistance to non-chlorinated acids) has led to its widespread industrial use in the electronic era as a thin layer coating electrical connectors, thereby ensuring good connection. For example, gold is used in the connectors of the more expensive electronics cables, such as audio, video and USB cables. The benefit of using gold over other connector metals such as tin in these applications has been debated; gold connectors are often criticized by audio-visual experts as unnecessary for most consumers and seen as simply a marketing ploy. However, the use of gold in other applications in electronic sliding contacts in highly humid or corrosive atmospheres, and in use for contacts with a very high failure cost (certain computers, communications equipment, spacecraft, jet aircraft engines) remains very common.[23]

Besides sliding electrical contacts, gold is also used in electrical contacts because of its resistance to corrosion, electrical conductivity, ductility and lack of toxicity.[24] Switch contacts are generally subjected to more intense corrosion stress than are sliding contacts. Fine gold wires are used to connect semiconductor devices to their packages through a process known as wire bonding.

The concentration of free electrons in gold metal is 5.90×1022 cm−3. Gold is highly conductive to electricity, and has been used for electrical wiring in some high-energy applications (only silver and copper are more conductive per volume, but gold has the advantage of corrosion resistance). For example, gold electrical wires were used during some of the Manhattan Project's atomic experiments, but large high current silver wires were used in the calutron isotope separator magnets in the project.
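The quoted electron concentration is consistent with one conduction electron per gold atom, which can be checked from gold's density and molar mass (both values assumed here rather than taken from this article's citations):

# One conduction electron per atom reproduces the quoted free-electron concentration.
AVOGADRO = 6.022e23        # atoms per mole
DENSITY_G_PER_CM3 = 19.3   # density of gold
MOLAR_MASS_G = 196.97      # molar mass of gold

atoms_per_cm3 = DENSITY_G_PER_CM3 / MOLAR_MASS_G * AVOGADRO
print(f"{atoms_per_cm3:.2e} atoms per cm^3")   # about 5.9e22, matching the figure above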
Non-electronic industry
Mirror for the future James Webb Space Telescope coated in gold to reflect infrared light
The world's largest gold bar has a mass of 250 kg. Toi museum, Japan.
A gold nugget of 5 mm in diameter (bottom) can be expanded through hammering into a gold foil of about 0.5 square meters. Toi museum, Japan.

    Gold solder is used for joining the components of gold jewelry by high-temperature hard soldering or brazing. If the work is to be of hallmarking quality, gold solder must match the carat weight of the work, and alloy formulas are manufactured in most industry-standard carat weights to color match yellow and white gold. Gold solder is usually made in at least three melting-point ranges referred to as Easy, Medium and Hard. By using the hard, high-melting point solder first, followed by solders with progressively lower melting points, goldsmiths can assemble complex items with several separate soldered joints.
    Gold can be made into thread and used in embroidery.
    Gold produces a deep, intense red color when used as a coloring agent in cranberry glass.
    In photography, gold toners are used to shift the color of silver bromide black-and-white prints towards brown or blue tones, or to increase their stability. Used on sepia-toned prints, gold toners produce red tones. Kodak published formulas for several types of gold toners, which use gold as the chloride.[25]
    Gold is a good reflector of electromagnetic radiation such as infrared and visible light as well as radio waves. It is used for the protective coatings on many artificial satellites, in infrared protective faceplates in thermal protection suits and astronauts' helmets and in electronic warfare planes like the EA-6B Prowler.
    Gold is used as the reflective layer on some high-end CDs.
    Automobiles may use gold for heat shielding. McLaren uses gold foil in the engine compartment of its F1 model.[26]
    Gold can be manufactured so thin that it appears transparent. It is used in some aircraft cockpit windows for de-icing or anti-icing by passing electricity through it. The heat produced by the resistance of the gold is enough to deter ice from forming.[27]

Commercial chemistry

Gold is attacked by and dissolves in alkaline solutions of potassium or sodium cyanide, to form the salt gold cyanide—a technique that has been used in extracting metallic gold from ores in the cyanide process. Gold cyanide is the electrolyte used in commercial electroplating of gold onto base metals and electroforming.

Gold chloride (chloroauric acid) solutions are used to make colloidal gold by reduction with citrate or ascorbate ions. Gold chloride and gold oxide are used to make cranberry or red-colored glass, which, like colloidal gold suspensions, contains evenly sized spherical gold nanoparticles.[28]
Medicine

Gold (usually as the metal) is perhaps the most anciently administered medicine (apparently by shamanic practitioners)[29] and was known to Dioscorides;[30][31] the apparent paradoxes in the substance's actual toxicology nevertheless suggest that serious gaps may still exist in the understanding of its action on physiology.[32]

In medieval times, gold was often seen as beneficial for the health, in the belief that something so rare and beautiful could not be anything but healthy. Even some modern esotericists and forms of alternative medicine assign metallic gold a healing power.[33]

Only salts and radioisotopes of gold are of use in standard pharmacological value, since elemental (metallic) gold is inert to all chemicals it encounters inside the body (i.e., ingested gold cannot be attacked by stomach acid). Some gold salts do have anti-inflammatory properties and at present two are still used as pharmaceuticals in the treatment of arthritis and other similar conditions in the US (sodium aurothiomalate and auranofin). These drugs have been explored as a means to help to reduce the pain and swelling of rheumatoid arthritis, and also (historically) against tuberculosis and some parasites.[34]

Gold alloys are used in restorative dentistry, especially in tooth restorations, such as crowns and permanent bridges. The gold alloys' slight malleability facilitates the creation of a superior molar mating surface with other teeth and produces results that are generally more satisfactory than those produced by the creation of porcelain crowns. The use of gold crowns in more prominent teeth such as incisors is favored in some cultures and discouraged in others.

Colloidal gold preparations (suspensions of gold nanoparticles) in water are intensely red-colored, and can be made with tightly controlled particle sizes up to a few tens of nanometers across by reduction of gold chloride with citrate or ascorbate ions. Colloidal gold is used in research applications in medicine, biology and materials science. The technique of immunogold labeling exploits the ability of the gold particles to adsorb protein molecules onto their surfaces. Colloidal gold particles coated with specific antibodies can be used as probes for the presence and position of antigens on the surfaces of cells.[35] In ultrathin sections of tissues viewed by electron microscopy, the immunogold labels appear as extremely dense round spots at the position of the antigen.[36]

Gold, or alloys of gold and palladium, are applied as conductive coating to biological specimens and other non-conducting materials such as plastics and glass to be viewed in a scanning electron microscope. The coating, which is usually applied by sputtering with an argon plasma, has a triple role in this application. Gold's very high electrical conductivity drains electrical charge to earth, and its very high density provides stopping power for electrons in the electron beam, helping to limit the depth to which the electron beam penetrates the specimen. This improves definition of the position and topography of the specimen surface and increases the spatial resolution of the image. Gold also produces a high output of secondary electrons when irradiated by an electron beam, and these low-energy electrons are the most commonly used signal source used in the scanning electron microscope.[37]

The isotope gold-198 (half-life 2.7 days) is used, in nuclear medicine, in some cancer treatments and for treating other diseases.[38][39]
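Because gold-198 is so short-lived, a therapeutic source loses most of its activity within weeks. A minimal sketch of the standard half-life relation, using the 2.7-day figure above:

# Fraction of a gold-198 source remaining after a given number of days (half-life 2.7 days).
HALF_LIFE_DAYS = 2.7

def remaining_fraction(days):
    return 0.5 ** (days / HALF_LIFE_DAYS)

print(remaining_fraction(2.7))   # 0.5 after one half-life
print(remaining_fraction(14))    # about 0.03, i.e. roughly 3% left after two weeks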
Food and drink

    Gold can be used in food and has the E number 175.[40]
    Gold leaf, flake or dust is used on and in some gourmet foods, notably sweets and drinks, as a decorative ingredient.[41] Gold flake was used by the nobility in medieval Europe as a decoration in food and drinks, in the form of leaf, flakes or dust, either to demonstrate the host's wealth or in the belief that something that valuable and rare must be beneficial for one's health.
    Danziger Goldwasser (German: Gold water of Danzig) or Goldwasser (English: Goldwater) is a traditional German herbal liqueur[42] produced in what is today Gdańsk, Poland, and Schwabach, Germany, and contains flakes of gold leaf. There are also some expensive (~$1000) cocktails which contain flakes of gold leaf.[43] However, since metallic gold is inert to all body chemistry, it has no taste, it provides no nutrition, and it leaves the body unaltered.[44]

Monetary exchange (historical)
Gold is commonly formed into bars for use in monetary exchange.
Two golden 20 kr coins from the Scandinavian Monetary Union, which was based on a gold standard. The coin to the left is Swedish and the right one is Danish.

Gold has been widely used throughout the world as money, for efficient indirect exchange (versus barter), and to store wealth in hoards. For exchange purposes, mints produce standardized gold bullion coins, bars and other units of fixed weight and purity.

The first coins containing gold were struck in Lydia, Asia Minor, around 600 BC.[45] The talent coin of gold in use during the periods of Grecian history both before and during the time of the life of Homer weighed between 8.42 and 8.75 grams.[46] From an earlier preference in using silver, European economies re-established the minting of gold as coinage during the thirteenth and fourteenth centuries.[47]

Bills (that mature into gold coin) and gold certificates (convertible into gold coin at the issuing bank) added to the circulating stock of gold standard money in most 19th century industrial economies. In preparation for World War I the warring nations moved to fractional gold standards, inflating their currencies to finance the war effort. Post-war, the victorious countries, most notably Britain, gradually restored gold-convertibility, but international flows of gold via bills of exchange remained embargoed; international shipments were made exclusively for bilateral trades or to pay war reparations.

After World War II gold was replaced by a system of nominally convertible currencies related by fixed exchange rates following the Bretton Woods system. Gold standards and the direct convertibility of currencies to gold have been abandoned by world governments, led in 1971 by the United States' refusal to redeem its dollars in gold. Fiat currency now fills most monetary roles. Switzerland was the last country to tie its currency to gold; it backed 40% of its value until the Swiss joined the International Monetary Fund in 1999.[48]

Central banks continue to keep a portion of their liquid reserves as gold in some form, and metals exchanges such as the London Bullion Market Association still clear transactions denominated in gold, including future delivery contracts. Today, gold mining output is declining.[49] With the sharp growth of economies in the 20th century, and increasing foreign exchange, the world's gold reserves and their trading market have become a small fraction of all markets, and fixed exchange rates of currencies to gold have been replaced by floating prices for gold and gold futures contracts. Though the gold stock grows by only 1 or 2% per year, very little metal is irretrievably consumed. Inventory above ground would satisfy many decades of industrial and even artisan uses at current prices.

The gold content of alloys is measured in carats (k). Pure gold is designated as 24k. English gold coins intended for circulation from 1526 into the 1930s were typically a standard 22k alloy called crown gold,[50] for hardness (American gold coins for circulation after 1837 contained the slightly lower amount of 0.900 fine gold, or 21.6 kt).[51]
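The carat and fineness figures in this paragraph are two expressions of the same ratio out of 24 parts; a tiny helper, purely illustrative, makes the conversion explicit:

# Carat <-> fineness conversions, as used in the paragraph above.
def carat_to_fineness(carat):
    return carat / 24            # fraction of gold by mass

def fineness_to_carat(fineness):
    return fineness * 24

print(carat_to_fineness(22))      # about 0.917 (crown gold)
print(fineness_to_carat(0.900))   # 21.6 carats, the post-1837 American circulating standard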

Although the prices of some platinum group metals can be much higher, gold has long been considered the most desirable of precious metals, and its value has been used as the standard for many currencies. Gold has been used as a symbol for purity, value, royalty, and particularly roles that combine these properties. Gold as a sign of wealth and prestige was ridiculed by Thomas More in his treatise Utopia. On that imaginary island, gold is so abundant that it is used to make chains for slaves, tableware, and lavatory seats. When ambassadors from other countries arrive, dressed in ostentatious gold jewels and badges, the Utopians mistake them for menial servants, paying homage instead to the most modestly dressed of their party.
Cultural history
The Turin Papyrus Map
Funerary mask of Tutankhamun
Jason returns with the golden fleece on an Apulian red-figure calyx krater, ca. 340–330 BC.
Ancient Greek golden decorated crown, funerary or marriage material, 370–360 BC. From a grave in Armento, Campania

Gold artifacts found at the Nahal Kana cave cemetery and dated during the 1980s were shown to be from within the Chalcolithic, and are considered the earliest find from the Levant (Gopher et al. 1990).[52] Gold artifacts in the Balkans also appear from the 4th millennium BC, such as those found in the Varna Necropolis near Lake Varna in Bulgaria, thought by one source (La Niece 2009) to be the earliest "well-dated" find of gold artifacts.[53] Gold artifacts such as the golden hats and the Nebra disk appeared in Central Europe from the 2nd millennium BC Bronze Age.

Egyptian hieroglyphs from as early as 2600 BC describe gold, which king Tushratta of the Mitanni claimed was "more plentiful than dirt" in Egypt.[54] Egypt and especially Nubia had the resources to make them major gold-producing areas for much of history. The earliest known map is known as the Turin Papyrus Map and shows the plan of a gold mine in Nubia together with indications of the local geology. The primitive working methods are described by both Strabo and Diodorus Siculus, and included fire-setting. Large mines were also present across the Red Sea in what is now Saudi Arabia.

The legend of the golden fleece may refer to the use of fleeces to trap gold dust from placer deposits in the ancient world. Gold is mentioned frequently in the Old Testament, starting with Genesis 2:11 (at Havilah), the story of The Golden Calf and many parts of the temple including the Menorah and the golden altar. In the New Testament, it is included with the gifts of the magi in the first chapters of Matthew. The Book of Revelation 21:21 describes the city of New Jerusalem as having streets "made of pure gold, clear as crystal". Exploitation of gold in the south-east corner of the Black Sea is said to date from the time of Midas, and this gold was important in the establishment of what is probably the world's earliest coinage in Lydia around 610 BC.[55] From the 6th or 5th century BC, the Chu (state) circulated the Ying Yuan, one kind of square gold coin.

In Roman metallurgy, new methods for extracting gold on a large scale were developed by introducing hydraulic mining methods, especially in Hispania from 25 BC onwards and in Dacia from 106 AD onwards. One of their largest mines was at Las Medulas in León (Spain), where seven long aqueducts enabled them to sluice most of a large alluvial deposit. The mines at Roşia Montană in Transylvania were also very large, and until very recently, still mined by opencast methods. They also exploited smaller deposits in Britain, such as placer and hard-rock deposits at Dolaucothi. The various methods they used are well described by Pliny the Elder in his encyclopedia Naturalis Historia written towards the end of the first century AD.

During his hajj to Mecca in 1324, Mansa Musa (ruler of the Mali Empire from 1312 to 1337) passed through Cairo in July of that year, reportedly accompanied by a camel train that included thousands of people and nearly a hundred camels; he gave away so much gold that it depressed the price in Egypt for over a decade.[56] A contemporary Arab historian remarked:

    Gold was at a high price in Egypt until they came in that year. The mithqal did not go below 25 dirhams and was generally above, but from that time its value fell and it cheapened in price and has remained cheap till now. The mithqal does not exceed 22 dirhams or less. This has been the state of affairs for about twelve years until this day by reason of the large amount of gold which they brought into Egypt and spent there [...].
    —Chihab Al-Umari, Kingdom of Mali[57]

The Portuguese overseas expansion started in 1415 with the taking of Ceuta, to control the gold trade coming across the desert. Although the caravan trade routes were then diverted, the Portuguese continued expanding southwards along the coast and eventually buying the gold directly (or less indirectly) from the Africans in the Gulf of Guinea.[citation needed]

The European exploration of the Americas was fueled in no small part by reports of the gold ornaments displayed in great profusion by Native American peoples, especially in Mesoamerica, Peru, Ecuador and Colombia. The Aztecs regarded gold as literally the product of the gods, calling it "god excrement" (teocuitlatl in Nahuatl), and after Moctezuma II was killed, most of this gold was shipped to Spain.[58] However, for the indigenous peoples of North America gold was considered useless and they saw much greater value in other minerals which were directly related to their utility, such as obsidian, flint, and slate.[59] Rumors of cities filled with gold fueled legends of El Dorado.

Gold has played a role in Western culture as a cause of desire and of corruption, as told in children's fables such as Rumpelstiltskin, in which straw is spun into gold for the peasant's daughter in return for her giving up her child when she becomes a princess, and in the stealing of the hen that lays golden eggs in Jack and the Beanstalk.

The top prize at the Olympic games is the gold medal.

There is an age-old tradition of biting gold to test its authenticity. Although this is certainly not a professional way of examining gold, the bite test was not meant to check whether a coin was gold (coins of 90% gold are fairly hard) but whether it was gold-plated lead: a lead coin is very soft, so teeth leave marks in it. Fake gold coins were a common problem before 1932, so weighing a coin and sliding it through a "counterfeit detector" slot were also common tests (a lead coin made heavy enough would be too thick to pass through a slot of measured size). Most establishments (especially US Western saloons) would not accept a gold (or silver) coin of high value without weighing it.[citation needed]

75% of all gold ever produced has been extracted since 1910.[60] It has been estimated that all gold ever refined would form a single cube 20 m (66 ft) on a side (equivalent to 8,000 m3).[60]
Sun symbol
Circled dot, the alchemical symbol for gold

One main goal of the alchemists was to produce gold from other substances, such as lead — presumably by the interaction with a mythical substance called the philosopher's stone. Although they never succeeded in this attempt, the alchemists promoted an interest in what can be done with substances, and this laid a foundation for today's chemistry. Their symbol for gold was the circle with a point at its center (☉), which was also the astrological symbol and the ancient Chinese character for the Sun.

Golden treasures have been rumored to exist at various locations following tragedies: the Jewish temple treasures, said to be in the Vatican following the temple's destruction in 70 AD; a gold stash on the Titanic; and the Nazi gold train following World War II.

The Dome of the Rock on the Jerusalem temple site is covered with an ultra-thin golden glaze.[clarification needed] The Sikh Golden Temple, the Harmandir Sahib, is a building covered with gold. Similarly, the Wat Phra Kaew Emerald Buddha temple in Thailand has ornamental gold statues, walls and roofs. Some European kings' and queens' crowns were made of gold, and gold has been used for bridal crowns since antiquity. An ancient Talmudic text from circa 100 AD describes Rachel, Rabbi Akiba's wife, asking for a "Jerusalem of Gold" (crown). A Greek burial crown made of gold was found in a grave dating from circa 370 BC.
Occurrence
This 156-troy-ounce (4.9 kg) nugget, known as the Mojave Nugget, was found by an individual prospector in the Southern California Desert using a metal detector.

Gold's atomic number of 79 makes it one of the higher atomic number elements that occur naturally. Traditionally, gold is thought to have formed by supernova nucleosynthesis,[61] but a new theory suggests that gold and other elements heavier than iron are made by the collision of neutron stars instead.[62][63] Either way, satellite spectrometers could in principle detect the resulting gold, "but we have no spectroscopic evidence that [such] elements have truly been produced."[64]

These gold nucleogenesis theories hold that the resulting explosions scattered metal-containing dusts (including heavy elements like gold) into the region of space in which they later condensed into our solar system and the Earth.[65] Because the Earth was molten when it was first formed, almost all of the gold present on Earth sank into the core. Most of the gold that is present today in the Earth's crust and mantle is thought to have been delivered to Earth later, by asteroid impacts during the late heavy bombardment.[66][67][68][69][70]
A schematic diagram of a NE (left) to SW (right) cross-section through the 2.020 billion year old Vredefort impact crater in South Africa and how it distorted the contemporary geological structures. The present erosion level is shown. Johannesburg is located where the Witwatersrand Basin (the yellow layer) is exposed at the "present surface" line, just inside the crater rim, on the left. Not to scale.

The asteroid that formed Vredefort crater 2.020 billion years ago is often credited with seeding the Witwatersrand basin in South Africa with the richest gold deposits on Earth.[71][72][73][74] However, the gold-bearing Witwatersrand rocks were laid down between 700 and 950 million years before the Vredefort impact.[75][76] These gold-bearing rocks had furthermore been covered by a thick layer of Ventersdorp lavas and the Transvaal Supergroup of rocks before the meteor struck. What the Vredefort impact achieved, however, was to distort the Witwatersrand basin in such a way that the gold-bearing rocks were brought to the present erosion surface in Johannesburg, on the Witwatersrand, just inside the rim of the original 300 km diameter crater caused by the meteor strike. The discovery of the deposit in 1886 launched the Witwatersrand Gold Rush. Nearly 50% of all the gold ever mined on Earth has been extracted from these Witwatersrand rocks.[76]

On Earth, gold is found in ores in rock formed from the Precambrian time onward.[53] It most often occurs as a native metal, typically in a metal solid solution with silver (i.e. as a gold silver alloy). Such alloys usually have a silver content of 8–10%. Electrum is elemental gold with more than 20% silver. Electrum's color runs from golden-silvery to silvery, dependent upon the silver content. The more silver, the lower the specific gravity.

Native gold occurs as very small to microscopic particles embedded in rock, often together with quartz or sulfide minerals such as "fool's gold", which is pyrite.[77] These are called lode deposits. The metal in a native state is also found in the form of free flakes, grains or larger nuggets[53] that have been eroded from rocks and end up in alluvial deposits called placer deposits. Such free gold is always richer at the surface of gold-bearing veins[clarification needed] owing to the oxidation of accompanying minerals followed by weathering, and washing of the dust into streams and rivers, where it collects and can be welded by water action to form nuggets.
Relative sizes of an 860 kg block of gold ore, and the 30 g of gold that can be extracted from it. Toi gold mine, Japan.
Gold left behind after a pyrite cube was oxidized to hematite. Note cubic shape of cavity.

Gold sometimes occurs combined with tellurium as the minerals calaverite, krennerite, nagyagite, petzite and sylvanite (see telluride minerals), and as the rare bismuthide maldonite (Au2Bi) and antimonide aurostibite (AuSb2). Gold also occurs in rare alloys with copper, lead, and mercury: the minerals auricupride (Cu3Au), novodneprite (AuPb3) and weishanite ((Au, Ag)3Hg2).

Recent research suggests that microbes can sometimes play an important role in forming gold deposits, transporting and precipitating gold to form grains and nuggets that collect in alluvial deposits.[78]

Another recent study has claimed that water in faults vaporizes during an earthquake, depositing gold. When an earthquake strikes, it moves along a fault. Water often lubricates faults, filling in fractures and jogs. About 6 miles (10 kilometers) below the surface, under extreme temperatures and pressures, the water carries high concentrations of carbon dioxide, silica, and gold. During an earthquake, the fault jog suddenly opens wider. The water inside the void instantly vaporizes, flashing to steam and forcing silica, which forms the mineral quartz, and gold out of the fluids and onto nearby surfaces.[79]
Seawater

The world's oceans contain gold. Measured concentrations of gold in the Atlantic and Northeast Pacific are 50–150 fmol/L, or 10–30 parts per quadrillion (about 10–30 g/km3). In general, gold concentrations for south Atlantic and central Pacific samples are the same (~50 fmol/L) but less certain. Mediterranean deep waters contain slightly higher concentrations of gold (100–150 fmol/L), attributed to wind-blown dust and/or rivers. At 10 parts per quadrillion the Earth's oceans would hold 15,000 tonnes of gold.[80] These figures are three orders of magnitude less than reported in the literature prior to 1988, indicating contamination problems with the earlier data.
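As a rough cross-check of the seawater figure above (a sketch only; the total ocean mass of roughly 1.4 × 10^21 kg is an assumed round value, not taken from the text):

    # Sketch: implied total gold in the oceans at 10 parts per quadrillion by mass.
    ocean_mass_kg = 1.4e21          # assumed approximate mass of the world's oceans
    concentration = 10e-15          # 10 parts per quadrillion, by mass
    gold_tonnes = ocean_mass_kg * concentration / 1000
    print(gold_tonnes)              # ~14,000 tonnes, consistent with the ~15,000 t quoted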

A number of people have claimed to be able to economically recover gold from sea water, but so far they have all been either mistaken or have acted with intentional deception. Prescott Jernegan ran a gold-from-seawater swindle in the United States in the 1890s. A British fraudster ran the same scam in England in the early 1900s.[81] Fritz Haber (the German inventor of the Haber process) did research on the extraction of gold from sea water in an effort to help pay Germany's reparations following World War I.[82] Based on the published values of 2 to 64 ppb of gold in seawater, a commercially successful extraction seemed possible. After analysis of 4,000 water samples yielding an average of 0.004 ppb, it became clear that extraction would not be possible, and he stopped the project.[83] No commercially viable mechanism for performing gold extraction from sea water has yet been identified. Gold synthesis is not economically viable and is unlikely to become so in the foreseeable future.
Specimens of crystalline native gold

    Native gold nuggets

    "Rope gold" from Lena River, Sakha Republic, Russia. Size: 2.5×1.2×0.7 cm.

    Crystalline gold from Mina Zapata, Santa Elena de Uairen, Venezuela. Size: 3.7×1.1×0.4 cm.

    Gold leaf from Harvard Mine, Jamestown, California, USA. Size 9.3×3.2× >0.1 cm.

Production
Main article: List of countries by gold production
The entrance to an underground gold mine in Victoria, Australia
Pure gold precipitate produced by the aqua regia refining process
Time trend of gold production

At the end of 2009, it was estimated that all the gold ever mined totaled 165,000 tonnes.[3] This can be represented by a cube with an edge length of about 20.28 meters. At $1,600 per troy ounce, 165,000 metric tonnes of gold would have a value of $8.5 trillion.
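As a rough cross-check of these figures (a sketch only; the density of gold, about 19,300 kg/m3, and the 31.1035 g troy ounce are assumed conversion values, not stated above):

    # Sketch: volume, cube edge and dollar value of 165,000 tonnes of gold.
    total_kg = 165_000 * 1000                        # 165,000 tonnes in kilograms
    density_kg_m3 = 19_300                           # assumed density of gold
    volume_m3 = total_kg / density_kg_m3             # ~8,550 m^3
    edge_m = volume_m3 ** (1 / 3)                    # ~20.4 m, close to the quoted 20.28 m
    value_usd = (total_kg * 1000 / 31.1035) * 1600   # grams -> troy ounces -> USD
    print(round(edge_m, 2), "m,", round(value_usd / 1e12, 1), "trillion USD")

The small difference from the quoted 20.28 m edge comes only from rounding of the assumed density.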

World production for 2011 was 2,700 tonnes, compared to 2,260 tonnes for 2008.

Since the 1880s, South Africa has been the source of a large proportion of the world's gold supply, with about 50% of all gold ever produced having come from South Africa. Its production in 1970 accounted for 79% of the world supply, at about 1,480 tonnes. In 2007 China (with 276 tonnes) overtook South Africa as the world's largest gold producer, the first time since 1905 that South Africa had not been the largest.[84]
Mining
Main article: Gold mining

As of 2013, China was the world's leading gold-mining country, followed in order by Australia, the United States, Russia, and Peru. South Africa, which had dominated world gold production for most of the 20th century, had declined to sixth place.[85] Other major producers are Ghana, Burkina Faso, Mali, Indonesia and Uzbekistan.

In South America, the controversial project Pascua Lama aims at exploitation of rich fields in the high mountains of Atacama Desert, at the border between Chile and Argentina.

Today about one-quarter of the world gold output is estimated to originate from artisanal or small scale mining.[86]

The city of Johannesburg, located in South Africa, was founded as a result of the Witwatersrand Gold Rush, which followed the discovery of some of the largest gold deposits the world has ever seen. The gold fields are confined to the northern and north-western edges of the Witwatersrand basin, which is a 5–7 km thick layer of Archean rocks located, in most places, deep under the Free State, Gauteng and surrounding provinces.[87] These Witwatersrand rocks are exposed at the surface on the Witwatersrand, in and around Johannesburg, but also in isolated patches to the south-east and south-west of Johannesburg, as well as in an arc around the Vredefort Dome, which lies close to the center of the Witwatersrand basin.[75][87] From these surface exposures the basin dips extensively, requiring some of the mining to occur at depths of nearly 4000 m, making them, especially the Savuka and TauTona mines to the south-west of Johannesburg, the deepest mines on Earth. The gold is found only in six areas where Archean rivers from the north and north-west formed extensive pebbly braided river deltas before draining into the "Witwatersrand sea" where the rest of the Witwatersrand sediments were deposited.[87]

The Second Boer War of 1899–1901 between the British Empire and the Afrikaner Boers was at least partly over the rights of miners and possession of the gold wealth in South Africa.
Prospecting
Main article: Gold prospecting

During the 19th century, gold rushes occurred whenever large gold deposits were discovered. The first documented discovery of gold in the United States was at the Reed Gold Mine near Georgeville, North Carolina in 1803.[88] The first major gold strike in the United States occurred in a small north Georgia town called Dahlonega.[89] Further gold rushes occurred in California, Colorado, the Black Hills, Otago in New Zealand, Australia, Witwatersrand in South Africa, and the Klondike in Canada.
Bioremediation

A sample of the fungus Aspergillus niger was found growing from gold mining solution and was found to contain cyano metal complexes of metals such as gold, silver, copper, iron and zinc. The fungus also plays a role in the solubilization of heavy metal sulfides.[90]
Extraction
Main article: Gold extraction

Gold extraction is most economical in large, easily mined deposits. Ore grades as little as 0.5 mg/kg (0.5 parts per million, ppm) can be economical. Typical ore grades in open-pit mines are 1–5 mg/kg (1–5 ppm); ore grades in underground or hard rock mines are usually at least 3 mg/kg (3 ppm). Because ore grades of 30 mg/kg (30 ppm) are usually needed before gold is visible to the naked eye, in most gold mines the gold is invisible.
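To put these grades in perspective, a short sketch (the 31.1035 g troy ounce is an assumed conversion, not stated above) of how much ore must be processed to yield one troy ounce of gold:

    # Sketch: tonnes of ore per troy ounce of gold at a given grade (g of gold per tonne).
    TROY_OUNCE_G = 31.1035
    for grade_ppm in (0.5, 1, 3, 5, 30):
        tonnes_per_oz = TROY_OUNCE_G / grade_ppm
        print(f"{grade_ppm} ppm: about {tonnes_per_oz:.0f} tonnes of ore per troy ounce")

At a 1 ppm grade this gives roughly 31 tonnes of ore per troy ounce, consistent with the waste figure quoted in the Pollution section below.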

The average gold mining and extraction costs were about US$317/oz in 2007, but these can vary widely depending on mining type and ore quality; global mine production amounted to 2,471.1 tonnes.[91]
Refining

After initial production, gold is often subsequently refined industrially by the Wohlwill process which is based on electrolysis or by the Miller process, that is chlorination in the melt. The Wohlwill process results in higher purity, but is more complex and is only applied in small-scale installations.[92][93] Other methods of assaying and purifying smaller amounts of gold include parting and inquartation as well as cupellation, or refining methods based on the dissolution of gold in aqua regia.[94]
Synthesis from other elements

Gold was synthesized from mercury by neutron bombardment in 1941, but the isotopes of gold produced were all radioactive.[95] Earlier, in 1924, a Japanese physicist, Hantaro Nagaoka, had reported accomplishing the same feat.[96]

Gold can currently be manufactured in a nuclear reactor by irradiation either of platinum or mercury.

Only the mercury isotope 196Hg, which occurs with a frequency of 0.15% in natural mercury, can be converted to gold with slow neutrons, by neutron capture followed by electron capture decay into 197Au. When irradiated with slow neutrons, other mercury isotopes are merely converted into one another, or into mercury isotopes that beta decay into thallium.

Using fast neutrons, the mercury isotope 198Hg, which composes 9.97% of natural mercury, can be converted by splitting off a neutron and becoming 197Hg, which then disintegrates to stable gold. This reaction, however, possesses a smaller activation cross-section and is feasible only with un-moderated reactors.
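The two routes described above can be summarized schematically (a summary of the preceding paragraphs, not additional data):

    \[
    {}^{196}\mathrm{Hg} + n_{\text{slow}} \;\rightarrow\; {}^{197}\mathrm{Hg} \;\xrightarrow{\;\text{electron capture}\;}\; {}^{197}\mathrm{Au}
    \]
    \[
    {}^{198}\mathrm{Hg} + n_{\text{fast}} \;\rightarrow\; {}^{197}\mathrm{Hg} + 2n \;\xrightarrow{\;\text{electron capture}\;}\; {}^{197}\mathrm{Au}
    \]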

It is also possible to eject several neutrons with very high energy into the other mercury isotopes in order to form 197Hg. However such high-energy neutrons can be produced only by particle accelerators.[clarification needed]
Consumption

The consumption of gold produced in the world is about 50% in jewelry, 40% in investments, and 10% in industry.[4]

According to the World Gold Council, China was the world's largest single consumer of gold in 2013, toppling India for the first time, with Chinese consumption increasing by 32 percent in a year while India's rose by only 13 percent and world consumption rose by 21 percent. Unlike in India, where gold is used mainly for jewellery, China uses gold for manufacturing and retail.[97]
Gold jewelry consumption by country in tonnes[98][99][100]
Country    2009    2010    2011    2012    2013
 India    442.37    745.70    986.3    864    974
Greater China    376.96    428.00    921.5    817.5    1120.1
 United States    150.28    128.61    199.5    161    190
 Turkey    75.16    74.07    143    118    175.2
 Saudi Arabia    77.75    72.95    69.1    58.5    72.2
 Russia    60.12    67.50    76.7    81.9    73.3
 United Arab Emirates    67.60    63.37    60.9    58.1    77.1
 Egypt    56.68    53.43    36    47.8    57.3
 Indonesia    41.00    32.75    55    52.3    68
 United Kingdom    31.75    27.35    22.6    21.1    23.4
Other Persian Gulf Countries    24.10    21.97    22    19.9    24.6
 Japan    21.85    18.50    −30.1    7.6    21.3
 South Korea    18.83    15.87    15.5    12.1    17.5
 Vietnam    15.08    14.36    100.8    77    92.2
 Thailand    7.33    6.28    107.4    80.9    140.1
Total    1508.70    1805.60       
Other Countries    251.6    254.0    390.4    393.5    450.7
World Total    1760.3    2059.6    3487.5    3163.6    3863.5
Pollution
Further information: Mercury cycle

Gold production is associated with hazardous pollution.[101][102] The ore, generally containing less than one ppm gold metal, is ground and mixed with sodium cyanide or mercury to react with the gold in the ore and separate it. Cyanide is a highly poisonous chemical which can kill living creatures exposed to it in minute quantities. Many cyanide spills[103] from gold mines have occurred in both developed and developing countries, killing aquatic life in long stretches of affected rivers. Environmentalists consider these events major environmental disasters.[104][105] When mercury is used in gold production, minute quantities of mercury compounds reach water bodies, causing heavy metal contamination. Mercury can then enter the human food chain in the form of methylmercury. Mercury poisoning in humans causes incurable brain function damage and severe retardation.

Thirty tons of used ore are dumped as waste to produce one troy ounce of gold.[106] Gold ore dumps are the source of many heavy elements such as cadmium, lead, zinc, copper, arsenic, selenium and mercury. When sulfide-bearing minerals in these ore dumps are exposed to air and water, the sulfide transforms into sulfuric acid, which in turn dissolves these heavy metals, facilitating their passage into surface water and groundwater. This process is called acid mine drainage. These gold ore dumps are long-term, highly hazardous wastes second only to nuclear waste dumps.[106]

Gold extraction is also a highly energy-intensive industry; extracting ore from deep mines and grinding the large quantity of ore for further chemical extraction requires nearly 25 kW·h of electricity per gram of gold produced.[107]
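A minimal sketch of what that energy figure implies at other scales (the 31.1035 g troy ounce is an assumed conversion):

    # Sketch: electricity use implied by ~25 kWh per gram of gold produced.
    KWH_PER_GRAM = 25
    TROY_OUNCE_G = 31.1035
    print(KWH_PER_GRAM * TROY_OUNCE_G)   # ~778 kWh per troy ounce
    print(KWH_PER_GRAM * 1000)           # 25,000 kWh per kilogram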
Chemistry
Gold (III) chloride solution in water

Although gold is the most noble of the noble metals,[108][109] it still forms many diverse compounds. The oxidation state of gold in its compounds ranges from −1 to +5, but Au(I) and Au(III) dominate its chemistry. Au(I), referred to as the aurous ion, is the most common oxidation state with soft ligands such as thioethers, thiolates, and tertiary phosphines. Au(I) compounds are typically linear. A good example is Au(CN)2−, which is the soluble form of gold encountered in mining. Curiously, aurous complexes of water are rare. The binary gold halides, such as AuCl, form zigzag polymeric chains, again featuring linear coordination at Au. Most drugs based on gold are Au(I) derivatives.[110]

Au(III) (auric) is a common oxidation state, and is illustrated by gold(III) chloride, Au2Cl6. The gold atom centers in Au(III) complexes, like other d8 compounds, are typically square planar, with chemical bonds that have both covalent and ionic character.

Aqua regia, a 1:3 mixture of nitric acid and hydrochloric acid, dissolves gold. Nitric acid oxidizes the metal to +3 ions, but only in minute amounts, typically undetectable in the pure acid because of the chemical equilibrium of the reaction. However, the ions are removed from the equilibrium by hydrochloric acid, forming AuCl4− ions, or chloroauric acid, thereby enabling further oxidation.
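A commonly cited overall equation for this dissolution (a standard textbook form, not given in the text above) is:

    \[
    \mathrm{Au} + \mathrm{HNO_3} + 4\,\mathrm{HCl} \;\rightarrow\; \mathrm{HAuCl_4} + \mathrm{NO}\uparrow + 2\,\mathrm{H_2O}
    \]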

Some free halogens react with gold.[111] Gold also reacts in alkaline solutions of potassium cyanide. With mercury, it forms an amalgam.
Less common oxidation states

Less common oxidation states of gold include −1, +2, and +5.

The −1 oxidation state occurs in compounds containing the Au− anion, called aurides. Caesium auride (CsAu), for example, crystallizes in the caesium chloride motif.[112] Other aurides include those of Rb+, K+, and tetramethylammonium (CH3)4N+.[113] Gold has the highest Pauling electronegativity of any metal, with a value of 2.54, making the auride anion relatively stable.

Gold(II) compounds are usually diamagnetic with Au–Au bonds such as [Au(CH2)2P(C6H5)2]2Cl2. The evaporation of a solution of Au(OH)3 in concentrated H2SO4 produces red crystals of gold(II) sulfate, Au2(SO4)2. Originally thought to be a mixed-valence compound, it has been shown to contain Au2^4+ cations.[114][115] A noteworthy, legitimate gold(II) complex is the tetraxenonogold(II) cation, which contains xenon as a ligand, found in [AuXe4](Sb2F11)2.[116]

Gold pentafluoride, along with its derivative anion, AuF6−, and its difluorine complex, gold heptafluoride, is the sole example of gold(V), the highest verified oxidation state.[117]

Some gold compounds exhibit aurophilic bonding, which describes the tendency of gold ions to interact at distances that are too long to be a conventional Au–Au bond but shorter than van der Waals bonding. The interaction is estimated to be comparable in strength to that of a hydrogen bond.
Mixed valence compounds

Well-defined cluster compounds are numerous.[113] In such cases, gold has a fractional oxidation state. A representative example is the octahedral species {Au[P(C6H5)3]}6^2+. Gold chalcogenides, such as gold sulfide, feature equal amounts of Au(I) and Au(III).
Toxicity

Pure metallic (elemental) gold is non-toxic and non-irritating when ingested[118] and is sometimes used as a food decoration in the form of gold leaf. Metallic gold is also a component of the alcoholic drinks Goldschläger, Gold Strike, and Goldwasser. Metallic gold is approved as a food additive in the EU (E175 in the Codex Alimentarius). Although the gold ion is toxic, the acceptance of metallic gold as a food additive is due to its relative chemical inertness, and resistance to being corroded or transformed into soluble salts (gold compounds) by any known chemical process which would be encountered in the human body.

Soluble compounds (gold salts) such as gold chloride are toxic to the liver and kidneys. Common cyanide salts of gold such as potassium gold cyanide, used in gold electroplating, are toxic by virtue of both their cyanide and gold content. There are rare cases of lethal gold poisoning from potassium gold cyanide.[119][120] Gold toxicity can be ameliorated with chelation therapy with an agent such as dimercaprol.

Gold metal was voted Allergen of the Year in 2001 by the American Contact Dermatitis Society. Gold contact allergies affect mostly women.[121] Despite this, gold is a relatively non-potent contact allergen, in comparison with metals like nickel.[122]
Price
Further information: Gold as an investment
Gold price history in 1960–2011

Gold is currently valued at around US$62,000 per kilogram.

Like other precious metals, gold is measured by troy weight and by grams. When it is alloyed with other metals the term carat or karat is used to indicate the purity of gold present, with 24 carats being pure gold and lower ratings proportionally less. The purity of a gold bar or coin can also be expressed as a decimal figure ranging from 0 to 1, known as the millesimal fineness, such as 0.995 being very pure.
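A minimal sketch of these two conversions (the 31.1035 g troy ounce and the example inputs are illustrative assumptions):

    # Sketch: price per kilogram -> price per troy ounce, and karat -> millesimal fineness.
    TROY_OUNCE_G = 31.1035
    def price_per_troy_ounce(price_per_kg):
        return price_per_kg * TROY_OUNCE_G / 1000.0
    def millesimal_fineness(karat):
        return karat / 24.0 * 1000.0     # 24 karat corresponds to 1000 fine (pure gold)
    print(round(price_per_troy_ounce(62_000)))   # ~1928 USD per troy ounce
    print(millesimal_fineness(18))               # 750.0, i.e. 18-karat gold is 75% gold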
History

The price of gold is determined through trading in the gold and derivatives markets, but a procedure known as the Gold Fixing in London, originating in September 1919, provides a daily benchmark price to the industry. The afternoon fixing was introduced in 1968 to provide a price when US markets are open.[123]

Historically gold coinage was widely used as currency; when paper money was introduced, it typically was a receipt redeemable for gold coin or bullion. In a monetary system known as the gold standard, a certain weight of gold was given the name of a unit of currency. For a long period, the United States government set the value of the US dollar so that one troy ounce was equal to $20.67 ($664.56/kg), but in 1934 the dollar was devalued to $35.00 per troy ounce ($1125.27/kg). By 1961, it was becoming hard to maintain this price, and a pool of US and European banks agreed to manipulate the market to prevent further currency devaluation against increased gold demand.[124]

On 17 March 1968, economic circumstances caused the collapse of the gold pool, and a two-tiered pricing scheme was established whereby gold was still used to settle international accounts at the old $35.00 per troy ounce ($1.13/g) but the price of gold on the private market was allowed to fluctuate; this two-tiered pricing system was abandoned in 1975 when the price of gold was left to find its free-market level. Central banks still hold historical gold reserves as a store of value although the level has generally been declining. The largest gold depository in the world is that of the U.S. Federal Reserve Bank in New York, which holds about 3%[125] of the gold ever mined, as does the similarly laden U.S. Bullion Depository at Fort Knox. In 2005 the World Gold Council estimated total global gold supply to be 3,859 tonnes and demand to be 3,754 tonnes, giving a surplus of 105 tonnes.[126]

Sometime around 1970 the price began to rise strongly,[127] and since 1968 the price of gold has ranged widely, from a high of $850/oz ($27,300/kg) on 21 January 1980, to a low of $252.90/oz ($8,131/kg) on 21 June 1999 (London Gold Fixing).[128] The period from 1999 to 2001 marked the "Brown Bottom" after a 20-year bear market.[129] Prices increased rapidly from 2001, but the 1980 high was not exceeded until 3 January 2008, when a new maximum of $865.35 per troy ounce was set.[130] Another record price was set on 17 March 2008, at $1023.50/oz ($32,900/kg).[130]

In late 2009, gold markets experienced renewed upward momentum due to increased demand and a weakening US dollar. On 2 December 2009, gold reached a new high, closing at $1,217.23.[131] Gold rallied further, hitting new highs in May 2010 after the European Union debt crisis prompted further purchases of gold as a safe asset.[132][133] On 1 March 2011, gold hit a new all-time high of $1432.57, based on investor concerns regarding ongoing unrest in North Africa as well as in the Middle East.[134]

Since April 2001 the gold price has more than quintupled in value against the US dollar, hitting a new all-time high of $1,913.50 on 23 August 2011,[135] prompting speculation that the long secular bear market has ended and a bull market has returned.[136]
Symbolism
   This article needs additional citations for verification. Please help improve this article by adding citations to reliable sources. Unsourced material may be challenged and removed. (February 2014)
Gold bars at the Emperor Casino in Macau

Great human achievements are frequently rewarded with gold, in the form of gold medals, golden trophies and other decorations. Winners of athletic events and other graded competitions are usually awarded a gold medal. Many awards such as the Nobel Prize are made from gold as well. Other award statues and prizes are depicted in gold or are gold plated (such as the Academy Awards, the Golden Globe Awards, the Emmy Awards, the Palme d'Or, and the British Academy Film Awards).

Aristotle in his ethics used gold symbolism when referring to what is now commonly known as the golden mean. Similarly, gold is associated with perfect or divine principles, such as in the case of the golden ratio and the golden rule.

Gold is further associated with the wisdom of aging and fruition. The fiftieth wedding anniversary is golden. A person's most valued or most successful latter years are sometimes considered "golden years". The height of a civilization is referred to as a "golden age".

In some forms of Christianity and Judaism, gold has been associated both with holiness and evil. In the Book of Exodus, the Golden Calf is a symbol of idolatry, while in the Book of Genesis, Abraham was said to be rich in gold and silver, and Moses was instructed to cover the Mercy Seat of the Ark of the Covenant with pure gold. In Byzantine iconography the halos of Christ, Mary and the Christian saints are often golden.

Medieval kings were inaugurated under the signs of sacred oil and a golden crown, the latter symbolizing the eternal shining light of heaven and thus a Christian king's divinely inspired authority.[citation needed]

According to Christopher Columbus, those who had something of gold were in possession of something of great value on Earth and of a substance that could even help souls to paradise.[137]

Wedding rings have long been made of gold. Because it is long-lasting and unaffected by the passage of time, it may aid the ring's symbolism of eternal vows before God and/or the sun and moon, and the perfection the marriage signifies. In Orthodox Christian wedding ceremonies, the wedded couple is adorned with a golden crown (though some opt for wreaths instead) during the ceremony, an amalgamation of symbolic rites.

In popular culture gold has many connotations but is most generally connected to terms such as good or great, such as in the phrases: "has a heart of gold", "that's golden!", "golden moment", "then you're golden!" and "golden boy". It remains a cultural symbol of wealth and through that, in many societies, success.
State emblem

In 1965, the California Legislature designated gold "the State Mineral and mineralogical emblem".[138]

In 1968, the Alaska Legislature named gold "the official state mineral".[139]


 
Mat Cauthon
| Ravens
 
more |
Reported


aREALgod | Legendary Invincible!
 
more |
XBL:
PSN:
Steam:
ID: aTALLmidget
IP: Logged

5,169 posts
 
Mike Quinn
From Wikipedia, the free encyclopedia
For other people named Mike Quinn, see Mike Quinn (disambiguation).
Mike Quinn
Quinn as a member of the Dallas Cowboys
Quarterback
Personal information
Date of birth: April 15, 1974 (age 40)
Place of birth: Las Vegas, Nevada
Height: 6 ft 4 in (1.93 m)    Weight: 215 lb (98 kg)
Career information
College: Stephen F. Austin State University
Undrafted in 1997
Debuted in 1997 for the Pittsburgh Steelers
Last played in 2006 for the Winnipeg Blue Bombers
Career history

    Pittsburgh Steelers (1997)
    Rhein Fire (1998)
    Indianapolis Colts (1998)
    Dallas Cowboys (1998–1999)
    Miami Dolphins (2000–2001)
    Houston Texans (2002–2003)
    Denver Broncos (2004)*
    Pittsburgh Steelers (2004)*
    Montreal Alouettes (2005)*
    Winnipeg Blue Bombers (2006)

    *Offseason and/or practice squad member only

Career highlights and awards

    All-NFL Europe (1998)
    World Bowl champion (VI)

Career NFL statistics as of 2003
TDs–INTs    1–0
Passing yards    20
QB rating    85.4
Stats at NFL.com
Career CFL statistics as of 2006
TDs–INTs    3–5
Passing yards    355
QB rating    55.8
Stats at CFL.ca

Michael Patrick Quinn (born April 15, 1974 in Las Vegas, Nevada) is a former professional gridiron football quarterback. He was signed by the Pittsburgh Steelers as an undrafted free agent in 1997 and was also a member of the Rhein Fire, Indianapolis Colts, Dallas Cowboys, Miami Dolphins, Houston Texans, Denver Broncos, Montreal Alouettes and Winnipeg Blue Bombers. He played college football at Stephen F. Austin State University.

Quinn attended Lee High School in Houston, Texas, and Stephen F. Austin State University.[1] He started playing football in high school and continued at university, where he started for one season, his senior season. After he went undrafted in the 1997 NFL Draft, he signed with the Pittsburgh Steelers, making the roster as the third-string quarterback. Following the season, he was allocated to the Rhein Fire, whom he led to the championship game. He spent 1998 and 1999 as a backup for the Indianapolis Colts and Dallas Cowboys before signing with the Miami Dolphins, for whom he spent two seasons as a backup. In 2002, he was one of the first group of players signed by the National Football League (NFL) expansion franchise, the Houston Texans. As he had done in previous years of his career, Quinn spent two seasons as a backup for the new franchise. The final year of his NFL career was spent with the Denver Broncos in training camp and with the Steelers' practice squad. After going unsigned, Quinn signed with the Montreal Alouettes, joining their practice squad in August 2005 and leaving the team after the season. Quinn joined another Canadian team, the Winnipeg Blue Bombers, in March 2006, but after receiving playing time in a backup role he was released in August.

Contents

    1 Personal
    2 College career
    3 Professional career
        3.1 1997–2003
        3.2 2004–2006
    4 References

Personal

Quinn attended Robert E. Lee High School in Houston, Texas. He was named to the state All-Star team during his tenure. Currently, he and his wife, Jennifer, live in Houston, Texas. At Stephen F. Austin, he majored in accounting.
College career

On November 12, 1995 in a game against Southwest Texas State, Quinn came into the game for starting quarterback James Ritchey and threw three touchdown passes.[2]

In the game against Samford on October 27, 1996, Quinn led Stephen F. Austin to a 43–14 win after throwing a touchdown pass to Chris Jefferson at the end of the first half. SFA held the lead for the rest of the game.[3] Against McNeese State on November 3, Quinn led a come from behind win for SFA by throwing two touchdowns to Mikhael Ricks in the fourth quarter.[4] The next week, Quinn threw for 283 yards and threw four touchdown passes to lead Stephen F. Austin to another win making them 7–2.[5] However, against Southwest Texas State on November 17, Quinn threw 23 incomplete passes.[6]
Professional career
1997–2003

Quinn signed with the Pittsburgh Steelers as an undrafted free agent following the 1997 NFL Draft.[7] Quinn entered training camp behind Kordell Stewart, Mike Tomczak and Jim Miller on the depth chart, but after training camp Quinn had beaten out Miller and became the team's third-string quarterback.[8] He saw his only game action[9] on November 9 against the Baltimore Ravens, throwing for 10 yards on one completion.[10] Following the 1997 season, the Steelers allocated Quinn to play in NFL Europe,[11] and he later agreed to play for the Rhein Fire.[12] In his second game with the Fire on April 12, Quinn completed 13 of 21 passes for 194 yards, including two touchdown passes.[13] With Quinn as the starting quarterback, the Fire played in the World Bowl. However, Quinn was hampered by a sprained ankle and could not play in the game.[14][15] He returned to the Steelers after the NFL Europe season but was waived on August 31.[16]

After being waived by Pittsburgh, Quinn was claimed off waivers by the Indianapolis Colts on September 1. To make room for Quinn the Colts had to release Jim Miller, who had lost a roster spot on the Steelers to Quinn a year earlier.[17] However, after signing Doug Nussmeier, the Colts waived Quinn.[18]

The Dallas Cowboys, who were unsuccessful in claiming Quinn 10 days earlier,[17] claimed him after he was waived by the Colts.[19] In Dallas, Quinn became the Cowboys' second-string quarterback after Troy Aikman was injured and Jason Garrett became the starter.[20] He played in three games for the Cowboys in 1998, completing one pass for 10 yards. In 1999, Quinn did not play in a game for Dallas.[9] During the 2000 off-season, Garrett signed as a free agent with the New York Giants[21] and quarterback Paul Justin was signed by Dallas to compete with Quinn for the backup spot.[22] He was released on May 5, 2000.[23]

Quinn signed with the Miami Dolphins on May 23, 2000.[24] On November 6, Quinn threw a touchdown pass to Deon Dyer[25] but was waived by the Dolphins on November 10,[26] only to be re-signed four days later.[27] In the 2001 preseason, Quinn sprained a joint in his shoulder and was waived/injured.[28] He was released from injured reserve with an injury settlement on September 6.[29]

The Houston Texans, the newest franchise in the NFL, signed Quinn to a reserve/future contract on December 30, 2001.[30] Following the 2002 NFL Draft in which the Texans drafted quarterback David Carr with their first ever pick, Quinn became the backup.[31] Quinn and Tony Banks ended up winning the backup jobs to Carr[32] over Kent Graham and Ben Sankey.[33] Banks was second–string with Quinn being the third–string quarterback.[34] The Texans waived Quinn during final cuts on August 25, 2003. He was the final member of the Texans first signings still on the team.[35] He was re-signed to the practice squad on November 17 after David Carr suffered a sprained right shoulder.[36] However, when Banks also became injured, Quinn was signed from the practice squad to back up the now healthy Carr and rookie Dave Ragone.[37]
2004–2006

The Denver Broncos signed Quinn as an unrestricted free agent in March 2004.[38] At the end of training camp, Quinn was released by the Broncos.[39]

Quinn was re-signed by the Steelers on September 22 and assigned to their practice squad.[40] He was released from the practice squad on November 10.[41]

Quinn was signed to the Montreal Alouettes practice roster on August 29, 2005.[42]

The Winnipeg Blue Bombers signed Quinn on March 22, 2006, joining quarterbacks Tee Martin, Russ Michna and Kevin Glenn on Winnipeg's roster.[43] In his CFL preseason debut against the Montreal Alouettes on June 2, Quinn threw a 24-yard touchdown pass to Quentin McCord; however, the Blue Bombers lost 25–24.[44] After making the team out of training camp, Quinn injured his sternum and shoulder, which caused him to miss three weeks.[45] In his first week back with Winnipeg, Quinn was forced into the starting role after Kevin Glenn suffered a knee injury.[46] However, a string of poor performances, which included an interception in the end zone while Winnipeg was in field goal position, led to his release on August 28.[47]
References

    Brown, Chip. "TEXANS SIDELINE." The Dallas Morning News. September 4, 2002. Retrieved on February 5, 2011. "Mike Quinn: A product of Robert E. Lee High School in Houston and Stephen F. Austin,"
    "Stephen F. Austin handles SWT". Austin American-Statesman. November 12, 1995. Retrieved 2009-09-19.
    "Marshall toughens in Second Half". South Florida Sun-Sentinel. October 27, 1996. Retrieved 2009-09-19.
    "SFA rallies by McNeese". Baton Rouge Advocate. November 3, 1996. Retrieved 2009-09-19.
    Wire, From (November 10, 1996). "Austin College upsets Howard Payne". Dallas Morning News. Retrieved 2009-09-19.
    Date, BILL MARTIN (November 17, 1996). "Mathis explodes for 310". Austin American-Statesman. Retrieved 2009-09-19.
    "Why ask why? Questions linger after Woodson's Steelers career apparently ends". Pittsburgh Post-Gazette. April 22, 1997. Retrieved 2009-09-19.
    "Steelers cut Miller Series: NFL". The St. Petersburg Times. August 24, 1997. Retrieved 2009-09-19.
    "Mike Quinn". NFL.com. Retrieved 2009-09-19.
    "Box Score: Baltimore Ravens at Pittsburgh Steelers". Sports Illustrated. November 9, 1997. Retrieved 2009-09-19.
    "NFL Europe Allocation Draft". USA Today. February 18, 1998. Retrieved 2009-09-19.
    "Steelers expecting to lose Thigpen, Jackson likely out, too". Pittsburgh Post-Gazette. February 14, 1998. Retrieved 2009-09-19.
    "It's all up to Elway, even deal with 49ers". San Diego Union-Tribune. April 12, 1998. Retrieved 2009-09-19.
    "World Bowl May Be Decided by Second Fiddles". The Washington Post. June 14, 1998. Retrieved 2009-09-19.
    "Sports Briefly Substitute QB helps Fire win World Bowl". Fort Worth Star Telegram. June 15, 1998. Retrieved 2009-09-19.
    "Transactions". The New York Times. August 31, 1998. Retrieved 2009-09-19.
    "Quick slants". Pittsburgh Post-Gazette. September 1, 1998. Retrieved 2009-09-19.
    "Transactions". The Hartford Courant. September 10, 1998. Retrieved 2009-09-19.
    "The slighted Quinn". Pittsburgh Post-Gazette. Retrieved 2009-09-19.[dead link]
    Smith, Timothy W. (September 16, 1998). "Cowboys Rallying Round Garrett". The New York Times. Retrieved 2009-09-19.
    Taylor, Jean-Jacques (February 23, 2000). "Garrett leaving Cowboys to become NY Giants' backup QB". Dallas Morning News. Retrieved 2009-09-19.
    Moore, David (March 29, 2000). "Cowboys near deal with backup QB Justin". Dallas Morning News. Retrieved 2009-09-19.
    "Cowboys release QB Mike Quinn". Associated Press. May 5, 2000. Retrieved 2009-09-19.
    "Dolphins sign veteran QB Quinn". Associated Press. May 23, 2000. Retrieved 2009-09-19.


aREALgod | Legendary Invincible!
 
more |
XBL:
PSN:
Steam:
ID: aTALLmidget
IP: Logged

5,169 posts
 
String theory
From Wikipedia, the free encyclopedia
  (Redirected from String Theory)
For a more accessible and less technical introduction to this topic, see Introduction to M-theory.
String theory
Fundamental objects

    String
    Brane
    D-brane

Perturbative theory

    Bosonic
    Superstring

    Type I
    Type II (IIA / IIB)

    Heterotic (SO(32) · E8×E8)

Non-perturbative results

    S-duality
    T-duality
    M-theory

    AdS/CFT correspondence

Phenomenology

    Phenomenology
    Cosmology

    Landscape

Mathematics

    Mirror symmetry
    Vertex operator algebras


In physics, string theory is a theoretical framework in which the point-like particles of particle physics are replaced by one-dimensional objects called strings.[1] String theory aims to explain all types of observed elementary particles using quantum states of these strings. In addition to the particles postulated by the standard model of particle physics, string theory naturally incorporates gravity and so is a candidate for a theory of everything, a self-contained mathematical model that describes all fundamental forces and forms of matter. Besides this potential role, string theory is now widely used as a theoretical tool and has shed light on many aspects of quantum field theory and quantum gravity.[2]

The earliest version of string theory, bosonic string theory, incorporated only the class of particles known as bosons. It was then developed into superstring theory, which posits that a connection – a "supersymmetry" – exists between bosons and the class of particles called fermions. String theory requires the existence of extra spatial dimensions for its mathematical consistency. In realistic physical models constructed from string theory, these extra dimensions are typically compactified to extremely small scales.

String theory was first studied in the late 1960s[3] as a theory of the strong nuclear force before being abandoned in favor of the theory of quantum chromodynamics. Subsequently, it was realized that the very properties that made string theory unsuitable as a theory of nuclear physics made it a promising candidate for a quantum theory of gravity. Five consistent versions of string theory were developed before it was realized in the mid-1990s that they were different limits of a conjectured single 11-dimensional theory now known as M-theory.[4]

Many theoretical physicists, including Stephen Hawking, Edward Witten and Juan Maldacena, believe that string theory is a step towards the correct fundamental description of nature: it accommodates a consistent combination of quantum field theory and general relativity, agrees with insights in quantum gravity (such as the holographic principle and black hole thermodynamics) and has passed many non-trivial checks of its internal consistency.[citation needed] According to Hawking, "M-theory is the only candidate for a complete theory of the universe."[5] Other physicists, such as Richard Feynman,[6][7] Roger Penrose[8] and Sheldon Lee Glashow,[9] have criticized string theory for not providing novel experimental predictions at accessible energy scales.

Contents

    1 Overview
        1.1 Strings
        1.2 Branes
        1.3 Dualities
            1.3.1 S-, T-, and U-duality
            1.3.2 M-theory
        1.4 Extra dimensions
            1.4.1 Number of dimensions
            1.4.2 Compact dimensions
            1.4.3 Brane-world scenario
            1.4.4 Effect of the hidden dimensions
    2 Testability and experimental predictions
        2.1 String harmonics
        2.2 Cosmology
        2.3 Supersymmetry
    3 AdS/CFT correspondence
        3.1 Examples of the correspondence
        3.2 Applications to quantum chromodynamics
        3.3 Applications to condensed matter physics
    4 Connections to mathematics
        4.1 Mirror symmetry
        4.2 Vertex operator algebras
    5 History
        5.1 Early results
        5.2 First superstring revolution
        5.3 Second superstring revolution
    6 Criticisms
        6.1 High energies
        6.2 Number of solutions
        6.3 Background independence
    7 See also
    8 References
    9 Further reading
        9.1 Popular books
            9.1.1 General
            9.1.2 Critical
        9.2 Textbooks
            9.2.1 For physicists
            9.2.2 For mathematicians
        9.3 Online material
    10 External links

Overview
Levels of magnification:
1. Macroscopic level: Matter
2. Molecular level
3. Atomic level: Protons, neutrons, and electrons
4. Subatomic level: Electron
5. Subatomic level: Quarks
6. String level

The starting point for string theory is the idea that the point-like particles of elementary particle physics can also be modeled as one-dimensional objects called strings. According to string theory, strings can oscillate in many ways. On distance scales larger than the string radius, each oscillation mode gives rise to a different species of particle, with its mass, charge, and other properties determined by the string's dynamics. Splitting and recombination of strings correspond to particle emission and absorption, giving rise to the interactions between particles. An analogy for strings' modes of vibration is a guitar string's production of multiple distinct musical notes.[clarification needed] In this analogy, different notes correspond to different particles.

In string theory, one of the modes of oscillation of the string corresponds to a massless, spin-2 particle. Such a particle is called a graviton since it mediates a force which has the properties of gravity. Since string theory is believed to be a mathematically consistent quantum mechanical theory, the existence of this graviton state implies that string theory is a theory of quantum gravity.

String theory includes both open strings, which have two distinct endpoints, and closed strings, which form a complete loop. The two types of string behave in slightly different ways, yielding different particle types. For example, all string theories have closed string graviton modes, but only open strings can correspond to the particles known as photons. Because the two ends of an open string can always meet and connect, forming a closed string, all string theories contain closed strings.

The earliest string model, the bosonic string, incorporated only the class of particles known as bosons. This model describes, at low enough energies, a quantum gravity theory, which also includes (if open strings are incorporated as well) gauge bosons such as the photon. However, this model has problems. What is most significant is that the theory has a fundamental instability, believed to result in the decay (at least partially) of spacetime itself. In addition, as the name implies, the spectrum of particles contains only bosons, particles which, like the photon, obey particular rules of behavior. Roughly speaking, bosons are the constituents of radiation, but not of matter, which is made of fermions. Investigating how a string theory may include fermions led to the invention of supersymmetry, a mathematical relation between bosons and fermions. String theories that include fermionic vibrations are now known as superstring theories; several kinds have been described, but all are now thought to be different limits of a theory called M-theory.

Since string theory incorporates all of the fundamental interactions, including gravity, many physicists hope that it fully describes our universe, making it a theory of everything. One of the goals of current research in string theory is to find a solution of the theory that is quantitatively identical with the standard model, with a small cosmological constant, containing dark matter and a plausible mechanism for cosmic inflation. It is not yet known whether string theory has such a solution, nor is it known how much freedom the theory allows to choose the details.

One of the challenges of string theory is that the full theory does not yet have a satisfactory definition in all circumstances. The scattering of strings is most straightforwardly defined using the techniques of perturbation theory, but it is not known in general how to define string theory nonperturbatively. It is also not clear as to whether there is any principle by which string theory selects its vacuum state, the spacetime configuration that determines the properties of our universe (see string theory landscape).
Strings

The motion of a point-like particle can be described by drawing a graph of its position with respect to time. The resulting picture depicts the worldline of the particle in spacetime. In an analogous way, one can draw a graph depicting the progress of a string as time passes. The string, which looks like a small line by itself, will sweep out a two-dimensional surface known as the worldsheet. The different string modes (giving rise to different particles, such as the photon or graviton) appear as waves on this surface.

A closed string looks like a small loop, so its worldsheet will look like a pipe. An open string looks like a segment with two endpoints, so its worldsheet will look like a strip. In a more mathematical language, these are both Riemann surfaces, the strip having a boundary and the pipe none.
Interaction in the subatomic world: world lines of point-like particles in the Standard Model or a world sheet swept up by closed strings in string theory

Strings can join and split. This is reflected by the form of their worldsheet, or more precisely, by its topology. For example, if a closed string splits, its worldsheet will look like a single pipe splitting into two pipes. This topology is often referred to as a pair of pants. If a closed string splits and its two parts later reconnect, its worldsheet will look like a single pipe splitting in two and then reconnecting, which also looks like a torus connected to two pipes (one representing the incoming string, and the other representing the outgoing one). An open string doing the same thing will have a worldsheet that looks like an annulus connected to two strips.

In quantum mechanics, one computes the probability for a point particle to propagate from one point to another by summing certain quantities called probability amplitudes. Each amplitude is associated with a different worldline of the particle. This process of summing amplitudes over all possible worldlines is called path integration. In string theory, one computes probabilities in a similar way, by summing quantities associated with the worldsheets joining an initial string configuration to a final configuration. It is in this sense that string theory extends quantum field theory, replacing point particles by strings. As in quantum field theory, the classical behavior of fields is determined by an action functional, which in string theory can be either the Nambu–Goto action or the Polyakov action.
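For reference, the two actions named above take the following standard forms (textbook expressions; here T is the string tension, X^μ(σ, τ) the embedding of the worldsheet in spacetime, and h_ab an auxiliary worldsheet metric):

    \[
    S_{\mathrm{NG}} = -T \int d^2\sigma\, \sqrt{-\det\left(\partial_a X^\mu\, \partial_b X_\mu\right)},
    \qquad
    S_{\mathrm{P}} = -\frac{T}{2} \int d^2\sigma\, \sqrt{-h}\, h^{ab}\, \partial_a X^\mu\, \partial_b X_\mu .
    \]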
Branes
Main articles: Brane and D-brane

In string theory and related theories such as supergravity theories, a brane is a physical object that generalizes the notion of a point particle to higher dimensions.[10] For example, a point particle can be viewed as a brane of dimension zero, while a string can be viewed as a brane of dimension one. It is also possible to consider higher-dimensional branes. In dimension p, these are called p-branes. The word brane comes from the word "membrane" which refers to a two-dimensional brane.

Branes are dynamical objects which can propagate through spacetime according to the rules of quantum mechanics. They have mass and can have other attributes such as charge. A p-brane sweeps out a (p+1)-dimensional volume in spacetime called its worldvolume. Physicists often study fields analogous to the electromagnetic field which live on the worldvolume of a brane.

In string theory, D-branes are an important class of branes that arise when one considers open strings. As an open string propagates through spacetime, its endpoints are required to lie on a D-brane. The letter "D" in D-brane refers to the fact that we impose a certain mathematical condition on the system known as the Dirichlet boundary condition. The study of D-branes in string theory has led to important results such as the AdS/CFT correspondence, which has shed light on many problems in quantum field theory.

Branes are also frequently studied from a purely mathematical point of view[11] since they are related to subjects such as homological mirror symmetry and noncommutative geometry. Mathematically, branes may be represented as objects of certain categories, such as the derived category of coherent sheaves on a Calabi–Yau manifold, or the Fukaya category.
Dualities

In physics, the term duality refers to a situation where two seemingly different physical systems turn out to be equivalent in a nontrivial way. If two theories are related by a duality, it means that one theory can be transformed in some way so that it ends up looking just like the other theory. The two theories are then said to be dual to one another under the transformation. Put differently, the two theories are mathematically different descriptions of the same phenomena.

In addition to providing a candidate for a theory of everything, string theory provides many examples of dualities between different physical theories and can therefore be used as a tool for understanding the relationships between these theories.[12]
S-, T-, and U-duality
Main articles: S-duality, T-duality and U-duality

These are dualities between string theories which relate seemingly different quantities. Large and small distance scales, as well as strong and weak coupling strengths, are quantities that have always marked very distinct limits of behavior of a physical system in both classical and quantum physics. But strings can obscure the difference between large and small, strong and weak, and this is how these five very different theories end up being related. T-duality relates the large and small distance scales between string theories, whereas S-duality relates strong and weak coupling strengths between string theories. U-duality links T-duality and S-duality.
M-theory
Main article: M-theory

Before the 1990s, string theorists believed there were five distinct superstring theories: type I, type IIA, type IIB, and the two flavors of heterotic string theory (SO(32) and E8×E8). The thinking was that out of these five candidate theories, only one was the actual correct theory of everything, and that theory was the one whose low energy limit, with ten spacetime dimensions compactified down to four, matched the physics observed in our world today. It is now believed that this picture was incorrect and that the five superstring theories are related to one another by the dualities described above. The existence of these dualities suggests that the five string theories are in fact special cases of a more fundamental theory called M-theory.[13]
String theory details by type and number of spacetime dimensions:

    Bosonic (26 dimensions): Only bosons, no fermions, meaning only forces, no matter, with both open and closed strings; major flaw: a particle with imaginary mass, called the tachyon, representing an instability in the theory.
    Type I (10 dimensions): Supersymmetry between forces and matter, with both open and closed strings; no tachyon; gauge group is SO(32).
    Type IIA (10 dimensions): Supersymmetry between forces and matter, with only closed strings; no tachyon; massless fermions are non-chiral.
    Type IIB (10 dimensions): Supersymmetry between forces and matter, with only closed strings; no tachyon; massless fermions are chiral.
    HO (10 dimensions): Supersymmetry between forces and matter, with closed strings only; no tachyon; heterotic, meaning right-moving and left-moving strings differ; gauge group is SO(32).
    HE (10 dimensions): Supersymmetry between forces and matter, with closed strings only; no tachyon; heterotic; gauge group is E8×E8.
Extra dimensions
Number of dimensions

An intriguing feature of string theory is that it predicts extra dimensions. In classical string theory the number of dimensions is not fixed by any consistency criterion. However, to make a consistent quantum theory, string theory is required to live in a spacetime of the so-called "critical dimension": we must have 26 spacetime dimensions for the bosonic string and 10 for the superstring. This is necessary to ensure the vanishing of the conformal anomaly of the worldsheet conformal field theory. Modern understanding indicates that there exist less trivial ways of satisfying this criterion. Cosmological solutions exist in a wider variety of dimensionalities, and these different dimensions are related by dynamical transitions. The dimensions are more precisely different values of the "effective central charge", a count of degrees of freedom that reduces to dimensionality in weakly curved regimes.[14][15]
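In sketch form, the criterion amounts to requiring that the total central charge of the worldsheet theory vanish. Each spacetime dimension contributes 1 (bosonic string) or 3/2 (superstring, counting the worldsheet fermion), while the reparametrization ghosts contribute −26 and the superconformal ghosts a further +11:

    \[ D - 26 = 0 \;\Rightarrow\; D = 26 \ \text{(bosonic)}, \qquad \tfrac{3}{2}D - 15 = 0 \;\Rightarrow\; D = 10 \ \text{(superstring)}. \]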

One such theory is the 11-dimensional M-theory, which requires spacetime to have eleven dimensions,[16] as opposed to the usual three spatial dimensions and the fourth dimension of time. The original string theories from the 1980s describe special cases of M-theory where the eleventh dimension is a very small circle or a line, and if these formulations are considered as fundamental, then string theory requires ten dimensions. But the theory also describes universes like ours, with four observable spacetime dimensions, as well as universes with up to 10 flat space dimensions, and also cases where the position in some of the dimensions is described by a complex number rather than a real number. The notion of spacetime dimension is not fixed in string theory: it is best thought of as different in different circumstances.[17]

Nothing in Maxwell's theory of electromagnetism or Einstein's theory of relativity makes this kind of prediction; these theories require physicists to insert the number of dimensions manually and arbitrarily, and this number is fixed and independent of potential energy. String theory allows one to relate the number of dimensions to scalar potential energy. In technical terms, this happens because a gauge anomaly exists for every separate number of predicted dimensions, and the gauge anomaly can be counteracted by including nontrivial potential energy in the equations of motion. Furthermore, the absence of potential energy in the "critical dimension" explains why flat spacetime solutions are possible.

This can be better understood by noting that a photon included in a consistent theory (technically, a particle carrying a force related to an unbroken gauge symmetry) must be massless. The mass of the photon that is predicted by string theory depends on the energy of the string mode that represents the photon. This energy includes a contribution from the Casimir effect, namely from quantum fluctuations in the string. The size of this contribution depends on the number of dimensions, since for a larger number of dimensions there are more possible fluctuations in the string position. Therefore, the photon in flat spacetime will be massless—and the theory consistent—only for a particular number of dimensions.[18] When the calculation is done, the critical dimensionality is not four as one may expect (three axes of space and one of time). Flat space string theories are 26-dimensional in the bosonic case, while superstring and M-theories turn out to involve 10 or 11 dimensions for flat solutions. In bosonic string theories, the 26 dimensions come from the Polyakov equation.[19] Starting from any dimension greater than four, it is necessary to consider how these are reduced to four-dimensional spacetime.
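A sketch of the zero-point-energy argument above, using zeta-function regularization: the zero-point energies of the D − 2 transverse oscillators shift the open bosonic string mass formula to

    \[ \alpha' M^2 = N - \frac{D-2}{24}, \qquad \text{using} \quad \sum_{n=1}^{\infty} n \;\to\; -\frac{1}{12}, \]

so the vector state at level N = 1 (the would-be photon) is massless only when (D − 2)/24 = 1, i.e. D = 26.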
Compact dimensions
Calabi–Yau manifold (3D projection)

Two ways have been proposed to resolve this apparent contradiction. The first is to compactify the extra dimensions; i.e., the 6 or 7 extra dimensions are so small as to be undetectable by present-day experiments.

To retain a high degree of supersymmetry, these compactification spaces must be very special, as reflected in their holonomy. A 6-dimensional manifold must have SU(3) structure, a particular case (torsionless) of this being SU(3) holonomy, making it a Calabi–Yau space, and a 7-dimensional manifold must have G2 structure, with G2 holonomy again being a specific, simple, case. Such spaces have been studied in attempts to relate string theory to the 4-dimensional Standard Model, in part due to the computational simplicity afforded by the assumption of supersymmetry. More recently, progress has been made constructing more realistic compactifications without the degree of symmetry of Calabi–Yau or G2 manifolds.[citation needed]

A standard analogy for this is to consider multidimensional space as a garden hose. If the hose is viewed from sufficient distance, it appears to have only one dimension, its length. Indeed, think of a ball just small enough to enter the hose. Throwing such a ball inside the hose, the ball would move more or less in one dimension; in any experiment we make by throwing such balls in the hose, the only important movement will be one-dimensional, that is, along the hose. However, as one approaches the hose, one discovers that it contains a second dimension, its circumference. Thus, an ant crawling inside it would move in two dimensions (and a fly flying in it would move in three dimensions). This "extra dimension" is only visible within a relatively close range to the hose, or if one "throws in" small enough objects. Similarly, the extra compact dimensions are only "visible" at extremely small distances, or by experimenting with particles with extremely small wavelengths (of the order of the compact dimension's radius), which in quantum mechanics means very high energies (see wave–particle duality).
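A rough numerical sketch of this last point uses E ≈ ħc/R for the energy of a quantum whose wavelength matches a compact radius R; the radii below are illustrative assumptions, not measured values.

    # Energy scale needed to probe a compact dimension of radius R, E ~ hbar*c/R.
    # The radii chosen are illustrative, not measured values.
    HBAR_C_GEV_M = 1.9733e-16  # hbar*c in GeV*metres (approximate)

    def probe_energy_gev(radius_m):
        """Energy in GeV of a quantum whose wavelength is comparable to radius_m."""
        return HBAR_C_GEV_M / radius_m

    for radius in (1e-18, 1e-25, 1.6e-35):  # roughly collider scale, intermediate, Planck length
        print(f"R = {radius:.1e} m  ->  E ~ {probe_energy_gev(radius):.2e} GeV")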
Brane-world scenario

Another possibility is that we are "stuck" in a 3+1 dimensional (three spatial dimensions plus one time dimension) subspace of the full universe. Properly localized matter and Yang–Mills gauge fields will typically exist if the sub-spacetime is an exceptional set of the larger universe.[20] These "exceptional sets" are ubiquitous in Calabi–Yau n-folds and may be described as subspaces without local deformations, akin to a crease in a sheet of paper or a crack in a crystal, the neighborhood of which is markedly different from the exceptional subspace itself. However, until the work of Randall and Sundrum,[21] it was not known that gravity can be properly localized to a sub-spacetime. In addition, spacetime may be stratified, containing strata of various dimensions, allowing us to inhabit the 3+1-dimensional stratum—such geometries occur naturally in Calabi–Yau compactifications.[22] Such sub-spacetimes are D-branes, hence such models are known as brane-world scenarios.
Effect of the hidden dimensions

In either case, gravity acting in the hidden dimensions affects other non-gravitational forces such as electromagnetism. In fact, Kaluza's early work demonstrated that general relativity in five dimensions actually predicts the existence of electromagnetism. However, because of the nature of Calabi–Yau manifolds, no new forces appear from the small dimensions, but their shape has a profound effect on how the forces between the strings appear in our four-dimensional universe. In principle, therefore, it is possible to deduce the nature of those extra dimensions by requiring consistency with the standard model, but this is not yet a practical possibility. It is also possible to extract information regarding the hidden dimensions by precision tests of gravity, but so far these have only put upper limitations on the size of such hidden dimensions.
Testability and experimental predictions

Although a great deal of recent work has focused on using string theory to construct realistic models of particle physics, several major difficulties complicate efforts to test models based on string theory. The most significant is the extremely small size of the Planck length, which is expected to be close to the string length (the characteristic size of a string, where strings become easily distinguishable from particles). Another issue is the huge number of metastable vacua of string theory, which might be sufficiently diverse to accommodate almost any phenomena we might observe at lower energies.
String harmonics

One unique prediction of string theory is the existence of string harmonics. At sufficiently high energies, the string-like nature of particles would become obvious. There should be heavier copies of all particles, corresponding to higher vibrational harmonics of the string. It is not clear how high these energies are. In most conventional string models, they would be close to the Planck energy, which is 10^14 times higher than the energies accessible in the newest particle accelerator, the LHC, making this prediction impossible to test with any particle accelerator in the near future. However, in models with large extra dimensions they could potentially be produced at the LHC, or at energies not far above its reach.
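Schematically, and in standard conventions, the masses of these harmonics grow with the square root of the excitation level,

    \[ M_n^2 \sim \frac{n}{\alpha'}, \qquad \text{i.e.} \quad M_n \sim \sqrt{n}\, M_s, \quad M_s = \frac{1}{\sqrt{\alpha'}}, \]

so the spacing of the tower is set by the string scale M_s, which in conventional models lies near the Planck scale but in large-extra-dimension models can be far lower.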
Cosmology

String theory as currently understood makes a series of predictions for the structure of the universe at the largest scales. Many phases in string theory have very large, positive vacuum energy.[23] Regions of the universe that are in such a phase will inflate exponentially rapidly in a process known as eternal inflation. As such, the theory predicts that most of the universe is very rapidly expanding. However, these expanding phases are not stable, and can decay via the nucleation of bubbles of lower vacuum energy. Since our local region of the universe is not very rapidly expanding, string theory predicts we are inside such a bubble. The spatial curvature of the "universe" inside the bubbles that form by this process is negative, a testable prediction.[24] Moreover, other bubbles will eventually form in the parent vacuum outside the bubble and collide with it. These collisions lead to potentially observable imprints on cosmology.[25] However, it is possible that neither of these will be observed if the spatial curvature is too small and the collisions are too rare.

Under certain circumstances, fundamental strings produced at or near the end of inflation can be "stretched" to astronomical proportions. These cosmic strings could be observed in various ways, for instance by their gravitational lensing effects. However, certain field theories also predict cosmic strings arising from topological defects in the field configuration.[26]
Supersymmetry
Main article: Supersymmetry

If confirmed experimentally, supersymmetry would be considered circumstantial evidence for string theory, because most consistent string theories are spacetime supersymmetric. As with other physical theories, the existence of spacetime supersymmetry is a desired feature addressing various issues encountered in non-supersymmetric theories such as the Standard Model. However, the absence of supersymmetric particles at energies accessible to the LHC would not actually disprove string theory, since the energy scale at which supersymmetry is broken could be well above the accelerator's range, making the supersymmetric particles too heavy to be produced at accessible energies. On the other hand, there are fully consistent non-supersymmetric string theories that can also provide phenomenologically relevant predictions.
AdS/CFT correspondence
Main article: AdS/CFT correspondence

The anti-de Sitter/conformal field theory (AdS/CFT) correspondence is a relationship which says that string theory is in certain cases equivalent to a quantum field theory. More precisely, one considers string or M-theory on an anti-de Sitter background. This means that the geometry of spacetime is obtained by perturbing a certain solution of Einstein's equation in the vacuum. In this setting, it is possible to define a notion of "boundary" of spacetime. The AdS/CFT correspondence states that this boundary can be regarded as the "spacetime" for a quantum field theory, and this field theory is equivalent to the bulk gravitational theory in the sense that there is a "dictionary" for translating calculations in one theory into calculations in the other.
Examples of the correspondence

The most famous example of the AdS/CFT correspondence states that Type IIB string theory on the product AdS5 × S5 is equivalent to N = 4 super Yang–Mills theory on the four-dimensional conformal boundary.[27][28][29][30] Another realization of the correspondence states that M-theory on AdS4 × S7 is equivalent to the ABJM superconformal field theory in three dimensions.[31] Yet another realization states that M-theory on AdS7 × S4 is equivalent to the so-called (2,0)-theory in six dimensions.[32]
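In the first of these examples, the dictionary relates the gauge-theory parameters to the string parameters; up to numerical conventions,

    \[ g_s \sim g_{\mathrm{YM}}^2, \qquad \left(\frac{R}{\ell_s}\right)^4 \sim g_{\mathrm{YM}}^2 N = \lambda, \]

where R is the common radius of AdS5 and S5, \ell_s = \sqrt{\alpha'} is the string length, and \lambda is the 't Hooft coupling, so strongly coupled gauge theory corresponds to weakly curved, and therefore tractable, string backgrounds.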
Applications to quantum chromodynamics
Main article: AdS/QCD

Since it relates string theory to ordinary quantum field theory, the AdS/CFT correspondence can be used as a theoretical tool for doing calculations in quantum field theory. For example, the correspondence has been used to study the quark–gluon plasma, an exotic state of matter produced in particle accelerators.

The physics of the quark–gluon plasma is governed by quantum chromodynamics, the fundamental theory of the strong nuclear force, but this theory is mathematically intractable in problems involving the quark–gluon plasma. In order to understand certain properties of the quark–gluon plasma, theorists have therefore made use of the AdS/CFT correspondence. One version of this correspondence relates string theory to a certain supersymmetric gauge theory called N = 4 super Yang–Mills theory. The latter theory provides a good approximation to quantum chromodynamics. One can thus translate problems involving the quark–gluon plasma into problems in string theory which are more tractable. Using these methods, theorists have computed the shear viscosity of the quark–gluon plasma.[33] In 2008, these predictions were confirmed at the Relativistic Heavy Ion Collider at Brookhaven National Laboratory.[34]
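The best-known of these results is the ratio of shear viscosity to entropy density, η/s = ħ/(4πk_B), obtained for a broad class of strongly coupled theories with gravity duals.[33] A quick numerical sketch with standard constants, for illustration only:

    # Kovtun-Son-Starinets ratio eta/s = hbar/(4*pi*k_B) from gauge/gravity duality.
    import math

    HBAR = 1.054571817e-34   # Planck constant over 2*pi, in J*s
    K_B = 1.380649e-23       # Boltzmann constant, in J/K

    eta_over_s = HBAR / (4 * math.pi * K_B)
    print(f"eta/s ~ {eta_over_s:.2e} K*s")                             # ~6.1e-13 K*s
    print(f"in natural units (hbar = k_B = 1): {1/(4*math.pi):.4f}")   # ~0.0796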
Applications to condensed matter physics

In addition, string theory methods have been applied to problems in condensed matter physics. Certain condensed matter systems are difficult to understand using the usual methods of quantum field theory, and the AdS/CFT correspondence may allow physicists to better understand these systems by describing them in the language of string theory. Some success has been achieved in using string theory methods to describe the transition of a superfluid to an insulator.[35][36]
Connections to mathematics

In addition to influencing research in theoretical physics, string theory has stimulated a number of major developments in pure mathematics. Like many developing ideas in theoretical physics, string theory does not at present have a mathematically rigorous formulation in which all of its concepts can be defined precisely. As a result, physicists who study string theory are often guided by physical intuition to conjecture relationships between the seemingly different mathematical structures that are used to formalize different parts of the theory. These conjectures are later proved by mathematicians, and in this way, string theory has served as a source of new ideas in pure mathematics.[37]
Mirror symmetry
Main article: Mirror symmetry (string theory)

One of the ways in which string theory influenced mathematics was through the discovery of mirror symmetry. In string theory, the shape of the unobserved spatial dimensions is typically encoded in mathematical objects called Calabi–Yau manifolds. These are of interest in pure mathematics, and they can be used to construct realistic models of physics from string theory. In the late 1980s, it was noticed that given such a physical model, it is not possible to uniquely reconstruct a corresponding Calabi–Yau manifold. Instead, one finds that there are two Calabi–Yau manifolds that give rise to the same physics. These manifolds are said to be "mirror" to one another. The existence of this mirror symmetry relationship between different Calabi–Yau manifolds has significant mathematical consequences as it allows mathematicians to solve many problems in enumerative algebraic geometry. Today mathematicians are still working to develop a mathematical understanding of mirror symmetry based on physicists' intuition.[38]
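A well-known illustration is the quintic threefold, where the mirror computation predicts the numbers n_d of rational curves of each degree d; the first few values, quoted here as a standard example from the mirror-symmetry literature, are

    \[ n_1 = 2875, \qquad n_2 = 609{,}250, \qquad n_3 = 317{,}206{,}375, \]

numbers that were subsequently confirmed by direct methods in algebraic geometry.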
Vertex operator algebras
Main articles: Vertex operator algebra and Monstrous moonshine

In addition to mirror symmetry, applications of string theory to pure mathematics include results in the theory of vertex operator algebras. For example, ideas from string theory were used by Richard Borcherds in 1992 to prove the monstrous moonshine conjecture relating the monster group (a construction arising in group theory, a branch of algebra) and modular functions (a class of functions which are important in number theory).[39]
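The numerical coincidence behind the conjecture involves the Fourier coefficients of the modular j-function and the dimensions of the smallest irreducible representations of the monster group (1, 196883, 21296876, ...); schematically,

    \[ j(\tau) = q^{-1} + 744 + 196884\,q + 21493760\,q^2 + \cdots, \qquad 196884 = 196883 + 1, \quad 21493760 = 21296876 + 196883 + 1. \]

Borcherds's proof explained these identities using a vertex operator algebra, the moonshine module, whose construction drew on ideas from string theory.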
History
Main article: History of string theory
Early results

Some of the structures reintroduced by string theory arose for the first time much earlier as part of the program of classical unification started by Albert Einstein. The first person to add a fifth dimension to a theory of gravity was Gunnar Nordström in 1914, who noted that gravity in five dimensions describes both gravity and electromagnetism in four. Nordström attempted to unify electromagnetism with his theory of gravitation, which was, however, superseded by Einstein's general relativity in 1919.[40] Thereafter, German mathematician Theodor Kaluza combined the fifth dimension with general relativity, and only Kaluza is usually credited with the idea.[40] In 1926, the Swedish physicist Oskar Klein gave a physical interpretation of the unobservable extra dimension—it is wrapped into a small circle. Einstein introduced a non-symmetric metric tensor, while much later Brans and Dicke added a scalar component to gravity. These ideas would be revived within string theory, where they are demanded by consistency conditions.

String theory was originally developed during the late 1960s and early 1970s as a never completely successful theory of hadrons, the subatomic particles like the proton and neutron that feel the strong interaction. In the 1960s, Geoffrey Chew and Steven Frautschi discovered that the mesons make families called Regge trajectories with masses related to spins in a way that was later understood by Yoichiro Nambu, Holger Bech Nielsen and Leonard Susskind to be the relationship expected from rotating strings. Chew advocated making a theory for the interactions of these trajectories that did not presume that they were composed of any fundamental particles, but would construct their interactions from self-consistency conditions on the S-matrix. The S-matrix approach was started by Werner Heisenberg in the 1940s as a way of constructing a theory that did not rely on the local notions of space and time, which Heisenberg believed break down at the nuclear scale. While the scale was off by many orders of magnitude, the approach he advocated was ideally suited for a theory of quantum gravity.

Working with experimental data, R. Dolen, D. Horn and C. Schmid[41] developed some sum rules for hadron exchange. When a particle and antiparticle scatter, virtual particles can be exchanged in two qualitatively different ways. In the s-channel, the two particles annihilate to make temporary intermediate states that fall apart into the final state particles. In the t-channel, the particles exchange intermediate states by emission and absorption. In field theory, the two contributions add together, one giving a continuous background contribution, the other giving peaks at certain energies. In the data, it was clear that the peaks were stealing from the background—the authors interpreted this as saying that the t-channel contribution was dual to the s-channel one, meaning both described the whole amplitude and included the other.

The result was widely advertised by Murray Gell-Mann, leading Gabriele Veneziano to construct a scattering amplitude that had the property of Dolen-Horn-Schmid duality, later renamed world-sheet duality. The amplitude needed poles where the particles appear, on straight line trajectories, and there is a special mathematical function whose poles are evenly spaced on half the real line, the Gamma function, which was widely used in Regge theory. By manipulating combinations of Gamma functions, Veneziano was able to find a consistent scattering amplitude with poles on straight lines, with mostly positive residues, which obeyed duality and had the appropriate Regge scaling at high energy. The amplitude could fit near-beam scattering data as well as other Regge type fits, and had a suggestive integral representation that could be used for generalization.
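The amplitude in question, quoted here for reference in its standard form, is the Euler Beta function of the linear Regge trajectories \alpha(s) = \alpha(0) + \alpha' s:

    \[ A(s,t) = \frac{\Gamma(-\alpha(s))\,\Gamma(-\alpha(t))}{\Gamma(-\alpha(s)-\alpha(t))}, \]

whose poles sit at \alpha(s) = 0, 1, 2, \ldots and \alpha(t) = 0, 1, 2, \ldots, i.e. at evenly spaced points along straight-line trajectories, as duality required.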

Over the following years, hundreds of physicists worked to complete the bootstrap program for this model, with many surprises. Veneziano himself discovered that for the scattering amplitude to describe the scattering of a particle that appears in the theory, an obvious self-consistency condition, the lightest particle must be a tachyon. Miguel Virasoro and Joel Shapiro found a different amplitude now understood to be that of closed strings, while Ziro Koba and Holger Nielsen generalized Veneziano's integral representation to multiparticle scattering. Veneziano and Sergio Fubini introduced an operator formalism for computing the scattering amplitudes that was a forerunner of world-sheet conformal theory, while Virasoro understood how to remove the poles with wrong-sign residues using a constraint on the states. Claud Lovelace calculated a loop amplitude, and noted that there is an inconsistency unless the dimension of the theory is 26. Charles Thorn, Peter Goddard and Richard Brower went on to prove that there are no wrong-sign propagating states in dimensions less than or equal to 26.

In 1969, Yoichiro Nambu, Holger Bech Nielsen, and Leonard Susskind recognized that the theory could be given a description in space and time in terms of strings. The scattering amplitudes were derived systematically from the action principle by Peter Goddard, Jeffrey Goldstone, Claudio Rebbi, and Charles Thorn, giving a space-time picture to the vertex operators introduced by Veneziano and Fubini and a geometrical interpretation to the Virasoro conditions.

In 1970, Pierre Ramond added fermions to the model, which led him to formulate a two-dimensional supersymmetry to cancel the wrong-sign states. John Schwarz and André Neveu added another sector to the fermi theory a short time later. In the fermion theories, the critical dimension was 10. Stanley Mandelstam formulated a world sheet conformal theory for both the bose and fermi case, giving a two-dimensional field theoretic path-integral to generate the operator formalism. Michio Kaku and Keiji Kikkawa gave a different formulation of the bosonic string, as a string field theory, with infinitely many particle types and with fields taking values not on points, but on loops and curves.

In 1974, Tamiaki Yoneya discovered that all the known string theories included a massless spin-two particle that obeyed the correct Ward identities to be a graviton. John Schwarz and Joel Scherk came to the same conclusion and made the bold leap to suggest that string theory was a theory of gravity, not a theory of hadrons. They reintroduced Kaluza–Klein theory as a way of making sense of the extra dimensions. At the same time, quantum chromodynamics was recognized as the correct theory of hadrons, shifting the attention of physicists and apparently leaving the bootstrap program in the dustbin of history.

String theory eventually made it out of the dustbin, but for the following decade work on the theory was almost entirely ignored by the wider physics community. Still, the theory continued to develop at a steady pace thanks to the work of a handful of devotees. Ferdinando Gliozzi, Joel Scherk, and David Olive realized in 1976 that the original Ramond and Neveu–Schwarz strings were separately inconsistent and needed to be combined. The resulting theory did not have a tachyon, and was proven to have space-time supersymmetry by John Schwarz and Michael Green in 1981. The same year, Alexander Polyakov gave the theory a modern path integral formulation, and went on to develop conformal field theory extensively. In 1979, Daniel Friedan showed that the equations of motion of string theory, which are generalizations of the Einstein equations of general relativity, emerge from the renormalization group equations for the two-dimensional field theory. Schwarz and Green discovered T-duality, and constructed two superstring theories, IIA and IIB related by T-duality, and type I theories with open strings. The consistency conditions had been so strong that the entire theory was nearly uniquely determined, with only a few discrete choices.
First superstring revolution

In the early 1980s, Edward Witten discovered that most theories of quantum gravity could not accommodate chiral fermions like the neutrino. This led him, in collaboration with Luis Alvarez-Gaumé, to study violations of the conservation laws in gravity theories with anomalies, concluding that type I string theories were inconsistent. Green and Schwarz discovered a contribution to the anomaly that Witten and Alvarez-Gaumé had missed, which restricted the gauge group of the type I string theory to be SO(32). In coming to understand this calculation, Edward Witten became convinced that string theory was truly a consistent theory of gravity, and he became a high-profile advocate. Following Witten's lead, between 1984 and 1986, hundreds of physicists started to work in this field, and this is sometimes called the first superstring revolution.

During this period, David Gross, Jeffrey Harvey, Emil Martinec, and Ryan Rohm discovered heterotic strings. The gauge group of these closed strings was two copies of E8, and either copy could easily and naturally include the standard model. Philip Candelas, Gary Horowitz, Andrew Strominger and Edward Witten found that the Calabi–Yau manifolds are the compactifications that preserve a realistic amount of supersymmetry, while Lance Dixon and others worked out the physical properties of orbifolds, distinctive geometrical singularities allowed in string theory. Cumrun Vafa generalized T-duality from circles to arbitrary manifolds, creating the mathematical field of mirror symmetry. Daniel Friedan, Emil Martinec and Stephen Shenker further developed the covariant quantization of the superstring using conformal field theory techniques. David Gross and Vipul Periwal discovered that string perturbation theory was divergent. Stephen Shenker showed it diverged much faster than in field theory, suggesting that new non-perturbative objects were missing.

In the 1990s, Joseph Polchinski discovered that the theory requires higher-dimensional objects, called D-branes and identified these with the black-hole solutions of supergravity. These were understood to be the new objects suggested by the perturbative divergences, and they opened up a new field with rich mathematical structure. It quickly became clear that D-branes and other p-branes, not just strings, formed the matter content of the string theories, and the physical interpretation of the strings and branes was revealed—they are a type of black hole. Leonard Susskind had incorporated the holographic principle of Gerardus 't Hooft into string theory, identifying the long highly excited string states with ordinary thermal black hole states. As suggested by 't Hooft, the fluctuations of the black hole horizon, the world-sheet or world-volume theory, describes not only the degrees of freedom of the black hole, but all nearby objects too.
Second superstring revolution
Edward Witten

In 1995, at the annual conference of string theorists at the University of Southern California (USC), Edward Witten gave a speech on string theory that in essence united the five string theories that existed at the time and gave birth to a new 11-dimensional theory called M-theory. M-theory was also foreshadowed in the work of Paul Townsend at approximately the same time. The flurry of activity that began at this time is sometimes called the second superstring revolution.

During this period, Tom Banks, Willy Fischler, Stephen Shenker and Leonard Susskind formulated matrix theory, a full holographic description of M-theory using IIA D0 branes.[42] This was the first definition of string theory that was fully non-perturbative and a concrete mathematical realization of the holographic principle. It is an example of a gauge-gravity duality and is now understood to be a special case of the AdS/CFT correspondence. Andrew Strominger and Cumrun Vafa calculated the entropy of certain configurations of D-branes and found agreement with the semi-classical answer for extreme charged black holes. Petr Hořava and Witten found the eleven-dimensional formulation of the heterotic string theories, showing that orbifolds solve the chirality problem. Witten noted that the effective description of the physics of D-branes at low energies is by a supersymmetric gauge theory, and found geometrical interpretations of mathematical structures in gauge theory that he and Nathan Seiberg had earlier discovered in terms of the location of the branes.

In 1997, Juan Maldacena noted that the low energy excitations of a theory near a black hole consist of objects close to the horizon, which for extreme charged black holes looks like an anti-de Sitter space. He noted that in this limit the gauge theory describes the string excitations near the branes. So he hypothesized that string theory on a near-horizon extreme-charged black-hole geometry, an anti-de Sitter space times a sphere with flux, is equally well described by the low-energy limiting gauge theory, the N = 4 supersymmetric Yang–Mills theory. This hypothesis, which is called the AdS/CFT correspondence, was further developed by Steven Gubser, Igor Klebanov and Alexander Polyakov, and by Edward Witten, and it is now well-accepted. It is a concrete realization of the holographic principle, which has far-reaching implications for black holes, locality and information in physics, as well as the nature of the gravitational interaction. Through this relationship, string theory has been shown to be related to gauge theories like quantum chromodynamics and this has led to more quantitative understanding of the behavior of hadrons, bringing string theory back to its roots.
Criticisms

Some critics of string theory say that it is a failure as a theory of everything.[43][44][45][46][47][48] Notable critics include Peter Woit, Lee Smolin, Philip Warren Anderson,[49] Sheldon Glashow,[50] Lawrence Krauss,[51] Carlo Rovelli[52] and Bert Schroer.[53] Some common criticisms include:

    Very high energies needed to test quantum gravity.
    Lack of uniqueness of predictions due to the large number of solutions.
    Lack of background independence.

High energies

It is widely believed that any theory of quantum gravity would require extremely high energies to probe directly, higher by orders of magnitude than those that current experiments such as the Large Hadron Collider[54] can attain. This is because strings themselves are expected to be only slightly larger than the Planck length, which is twenty orders of magnitude smaller than the radius of a proton, and high energies are required to probe small length scales. Generally speaking, quantum gravity is difficult to test because gravity is much weaker than the other forces, and because quantum effects are controlled by Planck's constant h, a very small quantity. As a result, the effects of quantum gravity are extremely weak.
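A quick sanity check of the quoted scales, using approximate values for illustration only:

    # Compare the Planck length to the proton charge radius to illustrate the
    # "twenty orders of magnitude" statement above. Values are approximate.
    PLANCK_LENGTH_M = 1.616e-35
    PROTON_RADIUS_M = 0.84e-15

    ratio = PROTON_RADIUS_M / PLANCK_LENGTH_M
    print(f"proton radius / Planck length ~ {ratio:.1e}")  # ~5e19, about 20 orders of magnitude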
Number of solutions

String theory as it is currently understood has a huge number of solutions, called string vacua,[23] and these vacua might be sufficiently diverse to accommodate almost any phenomena we might observe at lower energies.

The vacuum structure of the theory, called the string theory landscape (or the anthropic portion of string theory vacua), is not well understood. String theory contains an infinite number of distinct meta-stable vacua, and perhaps 10^520 of these or more correspond to a universe roughly similar to ours—with four dimensions, a high Planck scale, gauge groups, and chiral fermions. Each of these corresponds to a different possible universe, with a different collection of particles and forces.[23] What principle, if any, can be used to select among these vacua is an open issue. While there are no continuous parameters in the theory, there is a very large set of possible universes, which may be radically different from each other. It is also suggested that the landscape is surrounded by an even more vast swampland of consistent-looking semiclassical effective field theories, which are actually inconsistent.[55]

Some physicists believe this is a good thing, because it may allow a natural anthropic explanation of the observed values of physical constants, in particular the small value of the cosmological constant.[56][57] The argument is that most universes contain values for physical constants that do not lead to habitable universes (at least for humans), and so we happen to live in the "friendliest" universe. This principle is already employed to explain the existence of life on earth as the result of a life-friendly orbit around the medium-sized sun among an infinite number of possible orbits (as well as a relatively stable location in the galaxy).
Background independence
See also: Background independence

A separate and older criticism of string theory is that it is background-dependent—string theory describes perturbative expansions about fixed spacetime backgrounds which means that mathematical calculations in the theory rely on preselecting a background as a starting point. This is because, like many quantum field theories, much of string theory is still only formulated perturbatively, as a divergent series of approximations.[citation needed]

Although the theory, defined as a perturbative expansion on a fixed background, is not background independent, it has some features that suggest non-perturbative approaches would be background-independent—topology change is an established process in string theory, and the exchange of gravitons is equivalent to a change in the background. Since there are dynamic corrections to the background spacetime in the perturbative theory, one would expect spacetime to be dynamic in the nonperturbative theory as well since they would have to predict the same spacetime.[citation needed]

This criticism has been addressed to some extent by the AdS/CFT duality, which is believed to provide a full, non-perturbative definition of string theory in spacetimes with anti-de Sitter space asymptotics. Nevertheless, a non-perturbative definition of the theory in arbitrary spacetime backgrounds is still lacking. Some hope that M-theory, or a non-perturbative treatment of string theory (such as "background independent open string field theory") will have a background-independent formulation.[citation needed]
See also

    Conformal field theory
    Glossary of string theory
    List of string theory topics
    Loop quantum gravity
    Supergravity
    Supersymmetry

References

    Sean Carroll, Ph.D., Caltech, 2007, The Teaching Company, Dark Matter, Dark Energy: The Dark Side of the Universe, Guidebook Part 2 page 59, Accessed Oct. 7, 2013, "...The idea that the elementary constituents of matter are small loops of string rather than pointlike particles ... we think of string theory as a candidate theory of quantum gravity..."
    Klebanov, Igor and Maldacena, Juan (2009). "Solving Quantum Field Theories via Curved Spacetimes" (PDF). Physics Today 62: 28. Bibcode:2009PhT....62a..28K. doi:10.1063/1.3074260. Retrieved May 2013.
    http://superstringtheory.com/history/history4.html
    Schwarz, John H. (1999). "From Superstrings to M Theory". Physics Reports 315: 107. arXiv:hep-th/9807135. Bibcode:1999PhR...315..107S. doi:10.1016/S0370-1573(99)00016-2.
    Hawking, Stephen (2010). The Grand Design. Bantam Books. ISBN 055338466X.
    Woit, Peter (2006). Not Even Wrong: The Failure of String Theory and the Search for Unity in Physical Law. London: Jonathan Cape: New York: Basic Books. p. 174. ISBN 0-465-09275-6.
    P.C.W. Davies and J. Brown (ed), Superstrings, A Theory of Everything?, Cambridge University Press, 1988 (ISBN 0-521-35741-1).
    Penrose, Roger (2005). The Road to Reality: A Complete Guide to the Laws of the Universe. Knopf. ISBN 0-679-45443-8.
    Sheldon Glashow. "NOVA – The elegant Universe". Pbs.org. Retrieved on 2012-07-11.
    Moore, Gregory (2005). "What is... a Brane?" (PDF). Notices of the AMS 52: 214. Retrieved June 2013.
    Aspinwall, Paul; Bridgeland, Tom; Craw, Alastair; Douglas, Michael; Gross, Mark; Kapustin, Anton; Moore, Gregory; Segal, Graeme; Szendröi, Balázs; Wilson, P.M.H., eds. (2009). Dirichlet Branes and Mirror Symmetry. American Mathematical Society.
    Duality in string theory in nLab
    Witten, Edward (1995). "String theory dynamics in various dimensions". Nuclear Physics B 443 (1): 85–126. arXiv:hep-th/9503124. Bibcode:1995NuPhB.443...85W. doi:10.1016/0550-3213(95)00158-O.
    Hellerman, Simeon; Swanson, Ian (2007). "Dimension-changing exact solutions of string theory". Journal of High Energy Physics 2007 (9): 096. arXiv:hep-th/0612051v3. Bibcode:2007JHEP...09..096H. doi:10.1088/1126-6708/2007/09/096.
    Aharony, Ofer; Silverstein, Eva (2007). "Supercritical stability, transitions, and (pseudo)tachyons". Physical Review D 75 (4). arXiv:hep-th/0612031v2. Bibcode:2007PhRvD..75d6003A. doi:10.1103/PhysRevD.75.046003.
    Duff, M. J.; Liu, James T. and Minasian, R. (1995). "Eleven Dimensional Origin of String/String Duality: A One Loop Test". Nuclear Physics B 452: 261. arXiv:hep-th/9506126v2. Bibcode:1995NuPhB.452..261D. doi:10.1016/0550-3213(95)00368-3.
    Polchinski, Joseph (1998). String Theory, Cambridge University Press ISBN 0521672295.
    The calculation of the number of dimensions can be circumvented by adding a degree of freedom, which compensates for the "missing" quantum fluctuations. However, this degree of freedom behaves similar to spacetime dimensions only in some aspects, and the produced theory is not Lorentz invariant, and has other characteristics that do not appear in nature. This is known as the linear dilaton or non-critical string.
    Botelho, Luiz C. L. and Botelho, Raimundo C. L. (1999) "Quantum Geometry of Bosonic Strings – Revisited". Centro Brasileiro de Pesquisas Físicas.
    Hübsch, T. (1997). "A Hitchhiker's Guide to Superstring Jump Gates and Other Worlds". Nuclear Physics B – Proceedings Supplements 52: 347. Bibcode:1997NuPhS..52..347H. doi:10.1016/S0920-5632(96)00589-0.
    Randall, Lisa (1999). "An Alternative to Compactification". Physical Review Letters 83 (23): 4690. arXiv:hep-th/9906064. Bibcode:1999PhRvL..83.4690R. doi:10.1103/PhysRevLett.83.4690.
    Aspinwall, Paul S.; Greene, Brian R.; Morrison, David R. (1994). "Calabi-Yau moduli space, mirror manifolds and spacetime topology change in string theory". Nuclear Physics B 416 (2): 414. arXiv:hep-th/9309097. Bibcode:1994NuPhB.416..414A. doi:10.1016/0550-3213(94)90321-2.
    Kachru, Shamit; Kallosh, Renata; Linde, Andrei; Trivedi, Sandip (2003). "De Sitter vacua in string theory". Physical Review D 68 (4). arXiv:hep-th/0301240. Bibcode:2003PhRvD..68d6005K. doi:10.1103/PhysRevD.68.046005.
    Freivogel, Ben; Kleban, Matthew; Martínez, María Rodríguez; Susskind, Leonard (2006). "Observational consequences of a landscape". Journal of High Energy Physics 2006 (3): 039. arXiv:hep-th/0505232. Bibcode:2006JHEP...03..039F. doi:10.1088/1126-6708/2006/03/039.
    Kleban, Matthew; Levi, Thomas S.; Sigurdson, Kris (2013). "Observing the multiverse with cosmic wakes". Physical Review D 87 (4). arXiv:1109.3473. Bibcode:2013PhRvD..87d1301K. doi:10.1103/PhysRevD.87.041301.
    Polchinski, Joseph (2004). "Introduction to Cosmic F- and D-Strings". arXiv:hep-th/0412244 [hep-th].
    Maldacena, J. The Large N Limit of Superconformal Field Theories and Supergravity, arXiv:hep-th/9711200
    Gubser, S. S.; Klebanov, I. R. and Polyakov, A. M. (1998). "Gauge theory correlators from non-critical string theory". Physics Letters B428: 105–114. arXiv:hep-th/9802109. Bibcode:1998PhLB..428..105G. doi:10.1016/S0370-2693(98)00377-3.
    Edward Witten (1998). "Anti-de Sitter space and holography". Advances in Theoretical and Mathematical Physics 2: 253–291. arXiv:hep-th/9802150. Bibcode:1998hep.th....2150W.
    Aharony, O.; S.S. Gubser, J. Maldacena, H. Ooguri, Y. Oz (2000). "Large N Field Theories, String Theory and Gravity". Phys. Rept. 323 (3–4): 183–386. arXiv:hep-th/9905111. Bibcode:1999PhR...323..183A. doi:10.1016/S0370-1573(99)00083-6.
    Aharony, Ofer; Bergman, Oren; Jafferis, Daniel Louis; Maldacena, Juan (2008). "N = 6 superconformal Chern-Simons-matter theories, M2-branes and their gravity duals". Journal of High Energy Physics 2008 (10): 091. arXiv:0806.1218. Bibcode:2008JHEP...10..091A. doi:10.1088/1126-6708/2008/10/091.
    6d (2,0)-supersymmetric QFT in nLab
    Kovtun, P. K.; Son, Dam T.; Starinets, A. O. (2001). "Viscosity in strongly interacting quantum field theories from black hole physics". Physical Review Letters 94 (11).
    Luzum, Matthew; Romatschke, Paul (2008). "Conformal relativistic viscous hydrodynamics: Applications to RHIC results at sqrt [s_ {NN}]= 200 GeV". Physical Review C 78 (3). arXiv:0804.4015. Bibcode:2008PhRvC..78c4915L. doi:10.1103/PhysRevC.78.034915.
    Merali, Zeeya (2011). "Collaborative physics: string theory finds a bench mate". Nature 478 (7369): 302–304. Bibcode:2011Natur.478..302M. doi:10.1038/478302a. PMID 22012369.
    Sachdev, Subir (2013). "Strange and stringy". Scientific American 308 (44): 44. Bibcode:2012SciAm.308a..44S. doi:10.1038/scientificamerican0113-44.
    Deligne, Pierre; Etingof, Pavel; Freed, Daniel; Jeffery, Lisa; Kazhdan, David; Morgan, John; Morrison, David; Witten, Edward, eds. (1999). Quantum Fields and Strings: A Course for Mathematicians 1. American Mathematical Society. p. 1. ISBN 0821820125.
    Hori, Kentaro; Katz, Sheldon; Klemm, Albrecht; Pandharipande, Rahul; Thomas, Richard; Vafa, Cumrun; Vakil, Ravi; Zaslow, Eric, eds. (2003). Mirror Symmetry. American Mathematical Society. ISBN 0821829556.
    Frenkel, Igor; Lepowsky, James; Meurman, Arne (1988). Vertex operator algebras and the Monster. Pure and Applied Mathematics 134. Boston: Academic Press. ISBN 0-12-267065-5.
    http://www.hs.fi/tiede/Suomalaistutkija+kilpaili+Einsteinin+kanssa+ja+keksi+viidennen+ulottuvuuden/a1410754065152
    Dolen, R.; Horn, D.; Schmid, C. (1968). "Finite-Energy Sum Rules and Their Application to πN Charge Exchange". Physical Review 166 (5): 1768. Bibcode:1968PhRv..166.1768D. doi:10.1103/PhysRev.166.1768.
    Banks, T.; Fischler, W.; Shenker, S. H.; Susskind, L. (1997). "M theory as a matrix model: A conjecture". Physical Review D 55 (8): 5112. arXiv:hep-th/9610043v3. Bibcode:1997PhRvD..55.5112B. doi:10.1103/PhysRevD.55.5112.
    Woit, Peter Not Even Wrong. Math.columbia.edu. Retrieved on 2012-07-11.
    Smolin, Lee. The Trouble With Physics. Thetroublewithphysics.com. Retrieved on 2012-07-11.
    The n-Category Cafe. Golem.ph.utexas.edu (2007-02-25). Retrieved on 2012-07-11.
    John Baez weblog. Math.ucr.edu (2007-02-25). Retrieved on 2012-07-11.
    Woit, P. (Columbia University), String theory: An Evaluation, February 2001, arXiv:physics/0102051
    Woit, P. Is String Theory Testable? INFN Rome March 2007
    God (or Not), Physics and, of Course, Love: Scientists Take a Leap, New York Times, 4 January 2005: "String theory is the first science in hundreds of years to be pursued in pre-Baconian fashion, without any adequate experimental guidance"
    "there ain't no experiment that could be done nor is there any observation that could be made that would say, `You guys are wrong.' The theory is safe, permanently safe" NOVA interview
    Krauss, Lawrence (8 November 2005) Science and Religion Share Fascination in Things Unseen. New York Times: "String theory [is] yet to have any real successes in explaining or predicting anything measurable".
    Rovelli, Carlo (2003). "A Dialog on Quantum Gravity". International Journal of Modern Physics D [Gravitation; Astrophysics and Cosmology] 12 (9): 1509. arXiv:hep-th/0310077. Bibcode:2003IJMPD..12.1509R. doi:10.1142/S0218271803004304.
    Schroer, B. (2008) String theory and the crisis of particle physics II or the ascent of metaphoric arguments, arXiv:0805.1911
    Kiritsis, Elias (2007) String Theory in a Nutshell, Princeton University Press, ISBN 1400839335.
    Vafa, Cumrun (2005). "The String landscape and the swampland". arXiv:hep-th/0509212.
    Arkani-Hamed, N.; Dimopoulos, S. and Kachru, S. Predictive Landscapes and New Physics at a TeV, arXiv:hep-th/0501082, SLAC-PUB-10928, HUTP-05-A0001, SU-ITP-04-44, January 2005
    Susskind, L. The Anthropic Landscape of String Theory, arXiv:hep-th/0302219, February 2003

Further reading
Popular books
General

    Davies, Paul; Julian R. Brown (Eds.) (1992). Superstrings: A Theory of Everything?. Cambridge: Cambridge University Press. ISBN 0-521-43775-X.
    Greene, Brian (2003). The Elegant Universe: Superstrings, Hidden Dimensions, and the Quest for the Ultimate Theory. New York: W.W. Norton & Company. ISBN 0-393-05858-1.
    Greene, Brian (2004). The Fabric of the Cosmos: Space, Time, and the Texture of Reality. New York: Alfred A. Knopf. ISBN 0-375-41288-3.
    Kaku, Michio (1994). Hyperspace: A Scientific Odyssey Through Parallel Universes, Time Warps, and the Tenth Dimension. Oxford: Oxford University Press. ISBN 0-19-508514-0.
    Musser, George (2008). The Complete Idiot's Guide to String Theory. Indianapolis: Alpha. ISBN 978-1-59257-702-6.
    Randall, Lisa (2005). Warped Passages: Unraveling the Mysteries of the Universe's Hidden Dimensions. New York: Ecco Press. ISBN 0-06-053108-8.
    Susskind, Leonard (2006). The Cosmic Landscape: String Theory and the Illusion of Intelligent Design. New York: Hachette Book Group/Back Bay Books. ISBN 0-316-01333-1.
    Yau, Shing-Tung; Nadis, Steve (2010). The Shape of Inner Space: String Theory and the Geometry of the Universe's Hidden Dimensions. Basic Books. ISBN 978-0-465-02023-2.

Critical

    Penrose, Roger (2005). The Road to Reality: A Complete Guide to the Laws of the Universe. Knopf. ISBN 0-679-45443-8.
    Smolin, Lee (2006). The Trouble with Physics: The Rise of String Theory, the Fall of a Science, and What Comes Next. New York: Houghton Mifflin Co. ISBN 0-618-55105-0.
    Woit, Peter (2006). Not Even Wrong: The Failure of String Theory And the Search for Unity in Physical Law. London: Jonathan Cape &: New York: Basic Books. ISBN 978-0-465-09275-8.

Textbooks
For physicists

    Becker, Katrin, Becker, Melanie, and Schwarz, John (2007) String Theory and M-Theory: A Modern Introduction . Cambridge University Press. ISBN 0-521-86069-5
    Dine, Michael (2007) Supersymmetry and String Theory: Beyond the Standard Model. Cambridge University Press. ISBN 0-521-85841-0.
    Kiritsis, Elias (2007) String Theory in a Nutshell. Princeton University Press. ISBN 978-0-691-12230-4.
    Michael Green, John H. Schwarz and Edward Witten (1987) Superstring theory. Cambridge University Press.
        Vol. 1: Introduction. ISBN 0-521-35752-7.
        Vol. 2: Loop amplitudes, anomalies and phenomenology. ISBN 0-521-35753-5.
    Johnson, Clifford (2003). D-branes. Cambridge: Cambridge University Press. ISBN 0-521-80912-6.
    Polchinski, Joseph (1998) String theory. Cambridge University Press.
        Vol. 1: An Introduction to the Bosonic String. ISBN 0-521-63303-6.
        Vol. 2: Superstring Theory and Beyond. ISBN 0-521-63304-4.
    Szabo, Richard J. (2007) An Introduction to String Theory and D-brane Dynamics. Imperial College Press. ISBN 978-1-86094-427-7.
    Zwiebach, Barton (2004) A First Course in String Theory. Cambridge University Press. ISBN 0-521-83143-1.

For mathematicians

    Aspinwall, Paul; Bridgeland, Tom; Craw, Alastair; Douglas, Michael; Gross, Mark; Kapustin, Anton; Moore, Gregory; Segal, Graeme; Szendröi, Balázs; Wilson, P.M.H., eds. (2009). Dirichlet Branes and Mirror Symmetry. American Mathematical Society.
    Deligne, Pierre; Etingof, Pavel; Freed, Daniel; Jeffery, Lisa; Kazhdan, David; Morgan, John; Morrison, David; Witten, Edward, eds. (1999). Quantum Fields and Strings: A Course for Mathematicians. American Mathematical Society. ISBN 0821820125.
    Hori, Kentaro; Katz, Sheldon; Klemm, Albrecht; Pandharipande, Rahul; Thomas, Richard; Vafa, Cumrun; Vakil, Ravi; Zaslow, Eric, eds. (2003). Mirror Symmetry. American Mathematical Society. ISBN 0821829556.

Online material

    Klebanov, Igor and Maldacena, Juan (January 2009). "Solving Quantum Field Theories via Curved Spacetimes". Physics Today.
    Schwarz, John H. (2000). "Introduction to Superstring Theory". arXiv:hep-ex/0008017 [hep-ex].
    Witten, Edward (June 2002). "The Universe on a String" (PDF). Astronomy Magazine. Retrieved December 19, 2005.
    Witten, Edward (1998). "Duality, Spacetime and Quantum Mechanics". Kavli Institute for Theoretical Physics. Retrieved December 16, 2005.
    Woit, Peter (2002). "Is string theory even wrong?". American Scientist. Retrieved December 16, 2005.

External links

    Why String Theory—An introduction to string theory.
    Dialogue on the Foundations of String Theory at MathPages
    Superstrings! String Theory Home Page—Online tutorial
    A Layman’s Guide to String Theory—An explanation for the layperson
    Not Even Wrong—A blog critical of string theory
    The Official String Theory Web Site
    The Elegant Universe—A three-hour miniseries with Brian Greene by NOVA (original PBS Broadcast Dates: October 28, 8–10 p.m. and November 4, 8–9 p.m., 2003). Various images, texts, videos and animations explaining string theory.
    Beyond String Theory—A project by a string physicist explaining aspects of string theory to a broad audience
    String Theory and M-Theory—a serious but amusing lecture series with not too complicated math and not too advanced physics, by Prof. Leonard Susskind at Stanford University


Statefarm | Heroic Invincible!
 
more |
XBL: Sapid Statefarm
PSN:
Steam: Statefarm
ID: Statefarm
IP: Logged

3,728 posts
Moms spaghetti
The "world's shortest book" is a joke template that has been popular since the end of World War II.

The referenced template was ethnic in nature, with typical examples including "Italian War Heroes", "German Comedians", "Blacks I've met Yachting" and "Light Jewish Cuisine". Another template consists of a book title and an author which, when paired together, would be nonsensical. An example would be "Theory of Racial Harmony, by George Wallace" (the gist of the joke being that George Wallace was famous for his racial bigotry).[1]

A related joke template is the generic book title "Everything that X knows about Y", the book being entirely composed of blank pages. Examples include the (fictitious) books "Everything the average man knows about women", "Everything sports club owners know about on-field tactics", "Everything politicians know about how to run an economy" and "Things Better than Boobs".



 
Mat Cauthon
| Ravens
 
more |
Anime
From Wikipedia, the free encyclopedia
"Animé" redirects here. For the oleo-resin, see Animé (oleo-resin).
Anime (Japanese: アニメ, [anime]; English /ˈænɨmeɪ/) are Japanese animated productions usually featuring hand-drawn or computer animation. The word is the abbreviated pronunciation of "animation" in Japanese, where this term references all animation.[1] In other languages, the term is defined as animation from Japan or as a Japanese-disseminated animation style often characterized by colorful graphics, vibrant characters and fantastic themes.[2][3] Arguably, the stylization approach to the meaning may open up the possibility of anime produced in countries other than Japan.[4][5][6] For simplicity, many Westerners strictly view anime as an animation product from Japan.[3]

The earliest commercial Japanese animation dates to 1917, and production of anime works in Japan has since continued to increase steadily. The characteristic anime art style emerged in the 1960s with the works of Osamu Tezuka and spread internationally in the late twentieth century, developing a large domestic and international audience. Anime is distributed theatrically, by television broadcasts, directly to home media, and over the internet and is classified into numerous genres targeting diverse broad and niche audiences.

Anime is a diverse art form with distinctive production methods and techniques that have been adapted over time in response to emergent technologies. The production of anime focuses less on the animation of movement and more on the realism of settings as well as the use of camera effects, including panning, zooming and angle shots. Diverse art styles are used and character proportions and features can be quite varied, including characteristically large emotive or realistically sized eyes.

The anime industry consists of over 430 production studios, including major names like Studio Ghibli, Gainax and Toei Animation. Despite comprising only a fraction of the domestic film market, anime achieves a majority of DVD sales and has been an international success following the rise of televised English dubs. This rise in international popularity has resulted in non-Japanese productions using the anime art style, but these works have been defined as anime-influenced animation by both fans and the industry.

Contents  [hide]
1 Definition and usage
2 Format
3 History
4 Genres
5 Attributes
5.1 Animation technique
5.2 Characters
5.3 Music
6 Industry
6.1 Awards
7 Influence on world culture
7.1 Fan response
7.2 Anime style
8 See also
9 References
10 External links
Definition and usage
Anime is an art form, specifically animation, that includes all genres found in cinema, but it is sometimes mistakenly classified as a genre itself.[7]:7 In Japan, the term anime refers to all forms of animation from around the world.[1][8] English-language dictionaries define anime as a "Japanese-style animated film or television entertainment" or as "a style of animation created in Japan".[2][9]

The etymology of the word anime is disputed. The English term "animation" is written in Japanese katakana as アニメーション (animēshon, pronounced [animeːɕoɴ]),[3] and is アニメ (anime) in its shortened form.[3] Some sources claim that anime derives from the French term for animation, dessin animé,[10][11] but others believe this to be a myth derived from the French popularity of the medium in the late 1970s and 1980s.[3] In English, anime, when used as a common noun, normally functions as a mass noun (for example: "Do you watch anime?", "How much anime have you collected?").[12] Prior to the widespread use of anime, the term Japanimation was prevalent throughout the 1970s and 1980s. In the mid-1980s, the term anime began to supplant Japanimation.[10][13] In general, Japanimation now appears only in period works, where it is used to distinguish and identify Japanese animation.[13]

In 1987, Hayao Miyazaki stated that he despised the truncated word "anime" because to him it represented the desolation of the Japanese animation industry. He equated this desolation with animators lacking motivation and with mass-produced, overly expressive products that rely on fixed iconography for facial expressions and on protracted and exaggerated action scenes, but lack depth and sophistication because they do not attempt to convey emotion or thought.[14]

Format
The first format of anime was theatrical viewing, which began with commercial productions in 1917.[15] Originally the animated flips were crude and required live musical accompaniment before sound and vocal components were added to productions. On July 14, 1958, Nippon Television aired Mole's Adventure, both the first televised and the first color anime to debut.[16] The first televised series were not broadcast until the 1960s, and television has remained a popular medium for anime since.[7]:13 Works released in a direct-to-video format are called "original video animation" (OVA) or "original animation video" (OAV), and are typically not released theatrically or televised prior to the home media release.[7]:14[17] The emergence of the internet has led some animators to distribute works online in a format called "original net anime" (ONA).[18]

The home distribution of anime releases was popularized in the 1980s with the VHS and LaserDisc formats.[7]:14 The VHS NTSC video format used in both Japan and the United States is credited with aiding the rising popularity of anime in the 1990s.[7]:14 The LaserDisc and VHS formats were superseded by the DVD format, which offered unique advantages, including multiple subtitle and dub tracks on the same disc.[7]:15 The DVD format also has drawbacks in its use of region coding, which the industry adopted to address licensing, piracy and export problems by restricting playback to the region indicated on the DVD player.[7]:15 The Video CD (VCD) format was popular in Hong Kong and Taiwan, but became only a minor format in the United States, where it was closely associated with bootleg copies.[7]:15

History
Main article: History of anime

A cel from the earliest surviving Japanese animated short, produced in 1917
Anime arose in the early 20th century, when Japanese filmmakers experimented with the animation techniques also being pioneered in France, Germany, the United States and Russia.[11] A claim for the earliest Japanese animation is Katsudō Shashin, an undated and private work by an unknown creator.[19] In 1917, the first professional and publicly displayed works began to appear. Animators such as Ōten Shimokawa, Jun'ichi Kōuchi and Seitarō Kitayama produced numerous works, the oldest surviving film being Kōuchi's Namakura Gatana, a two-minute clip of a samurai trying to test a new sword on his target, only to suffer defeat.[15][20][21] The 1923 Great Kantō earthquake resulted in widespread destruction of Japan's infrastructure and the destruction of Shimokawa's warehouse, destroying most of these early works.

By the 1930s animation was well established in Japan as an alternative format to the live-action industry. It faced competition from foreign producers, and many animators, such as Noburō Ōfuji and Yasuji Murata, still worked in cheaper cutout animation rather than cel animation.[22] Other creators, such as Kenzō Masaoka and Mitsuyo Seo, nonetheless made great strides in animation technique; they benefited from the patronage of the government, which employed animators to produce educational shorts and propaganda.[23] The first talkie anime was Chikara to Onna no Yo no Naka, produced by Masaoka in 1933.[24][25] By 1940, numerous anime artists' organizations had arisen, including the Shin Mangaha Shudan and Shin Nippon Mangaka.[26] The first feature-length animated film was Momotaro's Divine Sea Warriors, directed by Seo in 1944 with sponsorship from the Imperial Japanese Navy.[27]


A frame from Momotaro's Divine Sea Warriors (1944), the first feature-length anime film
The success of The Walt Disney Company's 1937 feature film Snow White and the Seven Dwarfs profoundly influenced many Japanese animators.[28] In the 1960s, manga artist and animator Osamu Tezuka adapted and simplified many Disney animation techniques to reduce costs and to limit the number of frames in productions. He intended this as a temporary measure to allow him to produce material on a tight schedule with inexperienced animation staff.[29] Three Tales, aired in 1960, was the first anime shown on television. The first anime television series was Otogi Manga Calendar, aired from 1961 to 1964.

The 1970s saw a surge of growth in the popularity of manga, Japanese comic books and graphic novels, many of which were later animated. The work of Osamu Tezuka drew particular attention: he has been called a "legend"[30] and the "god of manga".[31][32] His work – and that of other pioneers in the field – inspired characteristics and genres that remain fundamental elements of anime today. The giant robot genre (known as "mecha" outside Japan), for instance, took shape under Tezuka, developed into the Super Robot genre under Go Nagai and others, and was revolutionized at the end of the decade by Yoshiyuki Tomino who developed the Real Robot genre. Robot anime like the Gundam and The Super Dimension Fortress Macross series became instant classics in the 1980s, and the robot genre of anime is still one of the most common in Japan and worldwide today. In the 1980s, anime became more accepted in the mainstream in Japan (although less than manga), and experienced a boom in production. Following a few successful adaptations of anime in overseas markets in the 1980s, anime gained increased acceptance in those markets in the 1990s and even more at the turn of the 21st century. In 2002, Spirited Away, a Studio Ghibli production directed by Hayao Miyazaki won the Golden Bear at the Berlin International Film Festival and in 2003 at the 75th Academy Awards it won the Academy Award for Best Animated Feature.

Genres
Anime are often classified by target demographic, including kodomo (children's), shōjo (girls'), shounen (boys') and a diverse range of genres targeting an adult audience. Shoujo and shounen anime sometimes contain elements popular with children of both sexes in an attempt to gain crossover appeal. Adult anime may feature a slower pace or greater plot complexity that younger audiences typically find unappealing, as well as adult themes and situations.:44–48 A subset of adult anime works feature pornographic elements and are labeled "R18" in Japan, but internationally these works are grouped together under the term hentai (Japanese for "pervert"). By contrast, a variety of anime sub-genres across demographic groups incorporate ecchi, sexual themes or undertones without depictions of sexual intercourse, as typified in the comedic or harem genres; due to its popularity among adolescent and adult anime enthusiasts, incorporation of ecchi elements in anime is considered a form of fan service.[33][34]:89

Anime's genre classification differs from that of other types of animation and does not lend itself to simple categorization.[7]:34 Gilles Poitras compared labeling Gundam 0080 and its complex depiction of war as a "giant robot" anime to simply labeling War and Peace a "war novel".[7]:34 Science fiction is a major anime genre and includes important historical works like Tezuka's Astro Boy and Yokoyama's Tetsujin 28-go. A major sub-genre of science fiction is mecha, with the Gundam metaseries being iconic.[7]:35 The diverse fantasy genre includes works based on Asian and Western traditions and folklore; examples include the Japanese feudal fairytale InuYasha, and the depiction of Scandinavian goddesses who move to Japan to maintain a computer called Yggdrasil in Oh My Goddess.[7]:37–40 Genre crossing in anime is also prevalent, such as the blend of fantasy and comedy in Dragon Half, and the incorporation of slapstick humor in the crime anime Castle of Cagliostro.[7]:41–43 Other subgenres found in anime include magical girl, harem, sports, martial arts, literary adaptations and war.[7]:45–49

Genres have emerged that explore homosexual romances. While originally pornographic in terminology, yaoi (male homosexuality) and yuri (female homosexuality) are broad terms used internationally to describe any focus on the themes or development of romantic homosexual relationships. Prior to 2000, homosexual characters were typically used for comedic effect, but some works portrayed these characters seriously or sympathetically.[7]:50

Attributes

Anime artists employ many distinct visual styles
Anime differs from other forms of animation in its art styles, methods of animation, production, and process. Visually, it encompasses a wide variety of art styles, differing from one creator, artist, and studio to another. While no single art style predominates in anime as a whole, works do share some similar attributes in terms of animation technique and character design; other visual variation is left to the artists as they see fit.

Animation technique
Anime follows the typical production of animation, including storyboarding, voice acting, character design, and cel production. Since the 1990s, animators have increasingly used computer animation to improve the efficiency of the production process. Artists like Noburō Ōfuji pioneered the earliest anime works, which were experimental and consisted of images drawn on blackboards, stop motion animation of paper cutouts, and silhouette animation.[35][36] Cel animation grew in popularity until it came to dominate the medium. In the 21st century, the use of other animation techniques is mostly limited to independent short films,[37] including the stop motion puppet animation work produced by Tadahito Mochinaga, Kihachirō Kawamoto and Tomoyasu Murata.[38][39] Computers were integrated into the animation process in the 1990s, with works such as Ghost in the Shell and Princess Mononoke mixing cel animation with computer-generated images.[7]:29 Fuji Film, a major cel production company, announced it would stop cel production, prompting an industry panic to procure cel imports and hastening the switch to digital processes.[7]:29

Prior to the digital era, anime was produced with traditional animation methods using a pose to pose approach.[35] The majority of mainstream anime uses fewer expressive key frames and more in-between animation.[40]

Japanese animation studios were pioneers of many limited animation techniques, and have given anime a distinct set of conventions. Unlike Disney animation, where the emphasis is on the movement, anime emphasizes the art quality and lets limited animation techniques make up for the lack of time spent on movement. Such techniques are often used not only to meet deadlines but also as artistic devices.[41] Anime scenes place emphasis on achieving three-dimensional views, and backgrounds are instrumental in creating the atmosphere of the work.[11] The backgrounds are not always invented and are occasionally based on real locations, as exemplified in Howl's Moving Castle and The Melancholy of Haruhi Suzumiya.[42][43] Oppliger stated that anime is one of the rare mediums where putting together an all-star cast usually comes out looking "tremendously impressive".[44]

The cinematic effects of anime differentiate it from the stage plays found in American animation. Anime is cinematically shot as if by camera, including panning, zooming, distance and angle shots, as well as more complex dynamic shots that would be difficult to produce in reality.[7]:58[45][46] In anime, the animation is produced before the voice acting, contrary to American animation, which does the voice acting first; this can cause lip-sync errors in the Japanese version.[7]:59

Characters
Body proportions of human anime characters tend to accurately reflect the proportions of the human body in reality. The height of the head is considered by the artist as the base unit of proportion. Head heights can vary, but most anime characters are about seven to eight heads tall.[47] Anime artists occasionally make deliberate modifications to body proportions to produce super deformed characters that feature a disproportionately small body compared to the head; many super deformed characters are two to four heads tall. Some anime works like Crayon Shin-chan completely disregard these proportions, such that they resemble Western cartoons.

A common anime character design convention is exaggerated eye size. The animation of characters with large eyes in anime can be traced back to Osamu Tezuka, who was deeply influenced by such early animation characters as Betty Boop, who was drawn with disproportionately large eyes. Tezuka is a central figure in anime and manga history, whose iconic art style and character designs allowed for the entire range of human emotions to be depicted solely through the eyes.[7]:60 The artist adds variable color shading to the eyes and particularly to the cornea to give them greater depth. Generally, a mixture of a light shade, the tone color, and a dark shade is used.[48][49] Cultural anthropologist Matt Thorn argues that Japanese animators and audiences do not perceive such stylized eyes as inherently more or less foreign.[50] However, not all anime have large eyes. For example, the works of Hayao Miyazaki are known for having realistically proportioned eyes, as well as realistic hair colors on their characters.[51]


Anime and manga artists often draw from a defined set of facial expressions to depict particular emotions
Hair in anime is often unnaturally lively and colorful or uniquely styled. The movement of hair in anime is exaggerated and "hair action" is used to emphasize the action and emotions of characters for added visual effect.[7]:62 Poitras traces hairstyle color to cover illustrations on manga, where eye-catching artwork and colorful tones are attractive for children's manga.[7]:61 Despite being produced for a domestic market, anime features characters whose race or nationality is not always defined, and this is often a deliberate decision, such as in the Pokémon animated series.[52]

Anime and manga artists often draw from a common canon of iconic facial expression illustrations to denote particular moods and thoughts.[53] These techniques are often different in form from their counterparts in Western animation, and they include a fixed iconography that is used as shorthand for certain emotions and moods.[54] These expressions are often exaggerated and are typically comedic in nature. For example, a male character may develop a nosebleed when aroused, a convention stemming from a Japanese old wives' tale.[54] A variety of visual symbols are employed, including sweat drops to depict nervousness, visible blushing for embarrassment, or glowing eyes for an intense glare.[55]:52

Music
The opening and credits sequences of most anime television episodes are accompanied by Japanese pop or rock songs, often by well-known bands. They may be written with the series in mind, but are also aimed at the general music market, and therefore often allude only vaguely or not at all to the themes or plot of the series. Pop and rock songs are also sometimes used as incidental music ("insert songs") in an episode, often to highlight particularly important scenes. Background music is more commonly employed to add flavor to a series, either to drive the plot or simply to decorate particular scenes and animated sequences. Some series also make all of the applied music available in the form of an original soundtrack (OST).[56]

Industry
The animation industry consists of more than 430 production companies with some of the major studios including Toei Animation, Gainax, Madhouse, Gonzo, Sunrise, Bones, TMS Entertainment, Nippon Animation, Studio Pierrot and Studio Ghibli.[55]:17 Many of the studios are organized into a trade association, The Association of Japanese Animations. There is also a labor union for workers in the industry, the Japanese Animation Creators Association. Studios will often work together to produce more complex and costly projects, as done with Studio Ghibli's Spirited Away.[55]:17 An anime episode can cost between US$100,000 and US$300,000 to produce.[57] In 2001, animation accounted for 7% of the Japanese film market, above the 4.6% market share for live-action works.[55]:17 The popularity and success of anime is seen through the profitability of the DVD market, contributing nearly 70% of total sales.[55]:17 Spirited Away (2001) is the highest-grossing anime film, with US$274,925,095.[58]

The anime market for the United States was worth approximately $2.74 billion in 2009.[59] Dubbed animation began airing in the United States in 2000 on networks like The WB and Cartoon Network's Adult Swim.[55]:18 In 2005, this resulted in five of the top ten anime titles having previously aired on Cartoon Network.[55]:18 As a part of localization, some editing of cultural references may occur to better follow the references of the non-Japanese culture.[60] The cost of English localization averages US $10,000 per episode.[61]

The industry has been subject to both praise and condemnation for fansubs, the addition of unlicensed and unauthorized subtitled translations of anime series or films.[55]:206 Fansubs, which were originally distributed on VHS bootlegged cassettes in the 1980s, have been freely available and disseminated online since the 1990s.[55]:206 Fansubbers tend to adhere to an unwritten code to destroy or no longer distribute an anime once an official translated or subtitled version becomes licensed, although fansubs typically continue to circulate through file sharing networks.[55]:207

Legal international availability of anime on the internet has changed in recent years, with simulcasts of series available on websites like Crunchyroll.

Awards
The anime industry has several annual awards which honor the year's best works. Major annual awards in Japan include the Ōfuji Noburō Award, the Mainichi Film Award for Best Animation Film, the Animation Kobe Awards, the Japan Media Arts Festival animation awards, the Tokyo Anime Award and the Japan Academy Prize for Animation of the Year. In the United States, anime films compete in the ICv2.com Anime Awards.[55]:257–258 There were also the American Anime Awards, designed to recognize excellence in anime titles nominated by the industry, which were held only once, in 2006.[55]:258 Anime productions have also been nominated for, and have won, awards not exclusively reserved for anime.

Influence on world culture

Akihabara district of Tokyo is the center of otaku subculture in Japan.
Anime has become commercially profitable in Western countries, as demonstrated by early commercially successful Western adaptations of anime, such as Astro Boy. Since the 19th century, many Westerners have expressed a particular interest in Japan, and anime has dramatically increased Westerners' exposure to Japanese culture.

Fan response
Anime clubs gave rise to anime conventions in the 1990s with the "anime boom", a period marked by increased popularity of anime.[7]:73 These conventions are dedicated to anime and manga and include elements like cosplay contests and industry talk panels.[55]:211 Cosplay, a portmanteau for "costume play", is not unique to anime and has become popular in contests and masquerades at anime conventions.[55]:214–215 Japanese culture and words have entered English usage through the popularity of the medium, including otaku, a derogatory Japanese term commonly used in English to denote a fan of anime and manga.[55]:195 Anime enthusiasts have produced fan fiction and fan art, including computer wallpaper and anime music videos.[55]:201–205

Last Edit: November 26, 2014, 05:46:51 AM by Byrne


 
Mat Cauthon
| Ravens
 
more |
Nuclear fission
From Wikipedia, the free encyclopedia
"Splitting the atom" redirects here. For the EP, see Splitting the Atom.

An induced fission reaction. A neutron is absorbed by a uranium-235 nucleus, turning it briefly into an excited uranium-236 nucleus, with the excitation energy provided by the kinetic energy of the neutron plus the forces that bind the neutron. The uranium-236, in turn, splits into fast-moving lighter elements (fission products) and releases three free neutrons. At the same time, one or more "prompt gamma rays" (not shown) are produced, as well.
In nuclear physics and nuclear chemistry, nuclear fission is either a nuclear reaction or a radioactive decay process in which the nucleus of an atom splits into smaller parts (lighter nuclei). The fission process often produces free neutrons and photons (in the form of gamma rays), and releases a very large amount of energy even by the energetic standards of radioactive decay.

Nuclear fission of heavy elements was discovered on December 17, 1938 by Otto Hahn and his assistant Fritz Strassmann, and explained theoretically in January 1939 by Lise Meitner and her nephew Otto Robert Frisch. Frisch named the process by analogy with the biological fission of living cells. It is an exothermic reaction which can release large amounts of energy both as electromagnetic radiation and as kinetic energy of the fragments (heating the bulk material where fission takes place). In order for fission to produce energy, the resulting fragments must together be more tightly bound (have a greater total binding energy) than the starting element.

Fission is a form of nuclear transmutation because the resulting fragments are not the same element as the original atom. The two nuclei produced are most often of comparable but slightly different sizes, typically with a mass ratio of products of about 3 to 2, for common fissile isotopes.[1][2] Most fissions are binary fissions (producing two charged fragments), but occasionally (2 to 4 times per 1000 events), three positively charged fragments are produced, in a ternary fission. The smallest of these fragments in ternary processes ranges in size from a proton to an argon nucleus.

Fission as encountered in the modern world is usually a deliberately produced man-made nuclear reaction induced by a neutron. It is less commonly encountered as a natural form of spontaneous radioactive decay (not requiring a neutron), occurring especially in very high-mass-number isotopes. The unpredictable composition of the products (which vary in a broad probabilistic and somewhat chaotic manner) distinguishes fission from purely quantum-tunnelling processes such as proton emission, alpha decay and cluster decay, which give the same products each time. Nuclear fission produces energy for nuclear power and drives the explosion of nuclear weapons. Both uses are possible because certain substances called nuclear fuels undergo fission when struck by fission neutrons, and in turn emit neutrons when they break apart. This makes possible a self-sustaining nuclear chain reaction that releases energy at a controlled rate in a nuclear reactor or at a very rapid uncontrolled rate in a nuclear weapon.

The amount of free energy contained in nuclear fuel is millions of times the amount of free energy contained in a similar mass of chemical fuel such as gasoline, making nuclear fission a very dense source of energy. The products of nuclear fission, however, are on average far more radioactive than the heavy elements which are normally fissioned as fuel, and remain so for significant amounts of time, giving rise to a nuclear waste problem. Concerns over nuclear waste accumulation and over the destructive potential of nuclear weapons may counterbalance the desirable qualities of fission as an energy source, and give rise to ongoing political debate over nuclear power.

Contents  [hide]
1 Physical overview
1.1 Mechanism
1.2 Energetics
1.2.1 Input
1.2.2 Output
1.3 Product nuclei and binding energy
1.4 Origin of the active energy and the curve of binding energy
1.5 Chain reactions
1.6 Fission reactors
1.7 Fission bombs
2 History
2.1 Discovery of nuclear fission
2.2 Fission chain reaction realized
2.3 Manhattan Project and beyond
2.4 Natural fission chain-reactors on Earth
3 See also
4 Notes
5 References
6 External links
Physical overview[edit]
Mechanism[edit]

A visual representation of an induced nuclear fission event where a slow-moving neutron is absorbed by the nucleus of a uranium-235 atom, which fissions into two fast-moving lighter elements (fission products) and additional neutrons. Most of the energy released is in the form of the kinetic energy of the fission products and the neutrons.

Fission product yields by mass for thermal neutron fission of U-235, Pu-239, a combination of the two typical of current nuclear power reactors, and U-233 used in the thorium cycle.
Nuclear fission can occur without neutron bombardment as a type of radioactive decay. This type of fission (called spontaneous fission) is rare except in a few heavy isotopes. In engineered nuclear devices, essentially all nuclear fission occurs as a "nuclear reaction" — a bombardment-driven process that results from the collision of two subatomic particles. In nuclear reactions, a subatomic particle collides with an atomic nucleus and causes changes to it. Nuclear reactions are thus driven by the mechanics of bombardment, not by the relatively constant exponential decay and half-life characteristic of spontaneous radioactive processes.

Many types of nuclear reactions are currently known. Nuclear fission differs importantly from other types of nuclear reactions, in that it can be amplified and sometimes controlled via a nuclear chain reaction (one type of general chain reaction). In such a reaction, free neutrons released by each fission event can trigger yet more events, which in turn release more neutrons and cause more fissions.

The chemical element isotopes that can sustain a fission chain reaction are called nuclear fuels, and are said to be fissile. The most common nuclear fuels are 235U (the isotope of uranium with an atomic mass of 235 and of use in nuclear reactors) and 239Pu (the isotope of plutonium with an atomic mass of 239). These fuels break apart into a bimodal range of chemical elements with atomic masses centering near 95 and 135 u (fission products). Most nuclear fuels undergo spontaneous fission only very slowly, decaying instead mainly via an alpha/beta decay chain over periods of millennia to eons. In a nuclear reactor or nuclear weapon, the overwhelming majority of fission events are induced by bombardment with another particle, a neutron, which is itself produced by prior fission events.

Nuclear fission in fissile fuels is the result of the nuclear excitation energy produced when a fissile nucleus captures a neutron. This energy, resulting from the neutron capture, is a result of the attractive nuclear force acting between the neutron and nucleus. It is enough to deform the nucleus into a double-lobed "drop", to the point that the two lobes exceed the distance over which the nuclear force can hold the two groups of charged nucleons together; when this happens, the fragments complete their separation and are driven further apart by their mutually repulsive charges, in a process which becomes irreversible with greater and greater distance. A similar process occurs in fissionable isotopes (such as uranium-238), but in order to fission, these isotopes require additional energy provided by fast neutrons (such as those produced by nuclear fusion in thermonuclear weapons).

The liquid drop model of the atomic nucleus predicts equal-sized fission products as an outcome of nuclear deformation. The more sophisticated nuclear shell model is needed to mechanistically explain the route to the more energetically favorable outcome, in which one fission product is slightly smaller than the other. A theory of fission based on the shell model was formulated by Maria Goeppert Mayer.

The most common fission process is binary fission, and it produces the fission products noted above, at 95±15 and 135±15 u. However, the binary process happens merely because it is the most probable. In anywhere from 2 to 4 fissions per 1000 in a nuclear reactor, a process called ternary fission produces three positively charged fragments (plus neutrons) and the smallest of these may range from so small a charge and mass as a proton (Z=1), to as large a fragment as argon (Z=18). The most common small fragments, however, are composed of 90% helium-4 nuclei with more energy than alpha particles from alpha decay (so-called "long range alphas" at ~ 16 MeV), plus helium-6 nuclei, and tritons (the nuclei of tritium). The ternary process is less common, but still ends up producing significant helium-4 and tritium gas buildup in the fuel rods of modern nuclear reactors.[3]

Energetics[edit]
Input[edit]

The stages of binary fission in a liquid drop model. Energy input deforms the nucleus into a fat "cigar" shape, then a "peanut" shape, followed by binary fission as the two lobes exceed the short-range strong force attraction distance, then are pushed apart and away by their electrical charge. In the liquid drop model, the two fission fragments are predicted to be the same size. The nuclear shell model allows for them to differ in size, as usually experimentally observed.
The fission of a heavy nucleus requires a total input energy of about 7 to 8 million electron volts (MeV) to initially overcome the strong force which holds the nucleus into a spherical or nearly spherical shape, and from there, deform it into a two-lobed ("peanut") shape in which the lobes are able to continue to separate from each other, pushed by their mutual positive charge, in the most common process of binary fission (two positively charged fission products + neutrons). Once the nuclear lobes have been pushed to a critical distance, beyond which the short range strong force can no longer hold them together, the process of their separation proceeds from the energy of the (longer range) electromagnetic repulsion between the fragments. The result is two fission fragments moving away from each other, at high energy.

About 6 MeV of the fission-input energy is supplied by the simple binding of an extra neutron to the heavy nucleus via the strong force; however, in many fissionable isotopes, this amount of energy is not enough for fission. Uranium-238, for example, has a near-zero fission cross section for neutrons of less than one MeV energy. If no additional energy is supplied by any other mechanism, the nucleus will not fission, but will merely absorb the neutron, as happens when U-238 absorbs slow and even some fraction of fast neutrons, to become U-239. The remaining energy to initiate fission can be supplied by two other mechanisms: one of these is additional kinetic energy of the incoming neutron, which is increasingly able to fission a fissionable heavy nucleus as it exceeds a kinetic energy of one MeV or more (so-called fast neutrons). Such high-energy neutrons are able to fission U-238 directly (see thermonuclear weapon for an application, where the fast neutrons are supplied by nuclear fusion). However, this process cannot happen to a great extent in a nuclear reactor, as too small a fraction of the fission neutrons produced by any type of fission have enough energy to efficiently fission U-238 (fission neutrons have a mean energy of about 2 MeV, but a most probable energy of only about 0.75 MeV, so a large fraction of them lack sufficient energy).[4]
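These figures can be checked numerically against the Watt spectrum, a standard empirical form for prompt fission neutron energies; the parameters below (a ≈ 0.988 MeV, b ≈ 2.249 per MeV, commonly quoted for thermal fission of U-235) and the use of Python with NumPy are assumptions made here for illustration, not details from the article.

# Rough numerical check of the prompt fission neutron energy distribution,
# assuming the Watt form chi(E) ~ exp(-E/a) * sinh(sqrt(b*E)).
import numpy as np

a, b = 0.988, 2.249                              # MeV and 1/MeV (assumed textbook values)
E = np.linspace(1e-4, 20.0, 200000)              # neutron energy grid in MeV
chi = np.exp(-E / a) * np.sinh(np.sqrt(b * E))
chi /= np.trapz(chi, E)                          # normalize to a probability density

mean = np.trapz(E * chi, E)                      # comes out near 2 MeV
mode = E[np.argmax(chi)]                         # comes out near 0.7 MeV
below_1MeV = np.trapz(chi[E < 1.0], E[E < 1.0])  # fraction roughly below the U-238 fast-fission threshold

print(f"mean {mean:.2f} MeV, most probable {mode:.2f} MeV, fraction below 1 MeV {below_1MeV:.0%}")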

Among the heavy actinide elements, however, those isotopes that have an odd number of neutrons (such as U-235 with 143 neutrons) bind an extra neutron with an additional 1 to 2 MeV of energy over an isotope of the same element with an even number of neutrons (such as U-238 with 146 neutrons). This extra binding energy is made available by neutron pairing effects: the Pauli exclusion principle allows the extra neutron to occupy the same nuclear orbital as the last neutron in the nucleus, so that the two form a pair. In such isotopes, therefore, no neutron kinetic energy is needed, for all the necessary energy is supplied by absorption of any neutron, either of the slow or fast variety (the former are used in moderated nuclear reactors, and the latter are used in fast neutron reactors, and in weapons). As noted above, the subgroup of fissionable elements that may be fissioned efficiently with their own fission neutrons (thus potentially causing a nuclear chain reaction in relatively small amounts of the pure material) are termed "fissile". Examples of fissile isotopes are U-235 and plutonium-239.
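A rough check of this pairing effect can be made from neutron separation energies; the atomic mass values below are approximate tabulated figures assumed here for illustration.

# Neutron separation energy S_n = [m(target) + m(neutron) - m(compound)] * 931.494 MeV/u.
# Atomic masses in u are approximate tabulated values (assumption, good to a few decimal places).
u_to_MeV = 931.494
m_n = 1.008665
m_U235, m_U236 = 235.043930, 236.045568
m_U238, m_U239 = 238.050788, 239.054293

S_n_U236 = (m_U235 + m_n - m_U236) * u_to_MeV   # energy gained when U-235 absorbs a neutron (~6.5 MeV)
S_n_U239 = (m_U238 + m_n - m_U239) * u_to_MeV   # energy gained when U-238 absorbs a neutron (~4.8 MeV)

print(f"U-235 + n: {S_n_U236:.1f} MeV   U-238 + n: {S_n_U239:.1f} MeV")
# The difference of roughly 1.7 MeV is the pairing effect described above: neutron
# absorption alone carries U-236 over its fission barrier, while U-239 falls short.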

Output[edit]
Typical fission events release about two hundred million electron volts (200 MeV) of energy per fission. The exact isotope which is fissioned, and whether or not it is fissionable or fissile, has only a small impact on the amount of energy released. This can be easily seen by examining the curve of binding energy (image below), and noting that the average binding energy of the actinide nuclides beginning with uranium is around 7.6 MeV per nucleon. Looking further left on the curve of binding energy, where the fission products cluster, it is easily observed that the binding energy of the fission products tends to center around 8.5 MeV per nucleon. Thus, in any fission event of an isotope in the actinide range of mass, roughly 0.9 MeV is released per nucleon of the starting element. The fission of U-235 by a slow neutron yields nearly the same energy as the fission of U-238 by a fast neutron. This energy release profile holds true for thorium and the various minor actinides as well.[5]
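A quick sanity check of this figure, as a minimal Python sketch using only the binding-energy values quoted above:

# ~0.9 MeV released per nucleon, times the number of nucleons in a typical actinide nucleus.
be_fuel     = 7.6    # MeV per nucleon in the actinide region (figure quoted above)
be_products = 8.5    # MeV per nucleon in the fission-product region (figure quoted above)
A           = 236    # nucleons in the compound nucleus formed by U-235 + n

energy_per_fission = (be_products - be_fuel) * A
print(f"~{energy_per_fission:.0f} MeV per fission")   # ~210 MeV, close to the ~200 MeV quoted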

By contrast, most chemical oxidation reactions (such as burning coal or TNT) release at most a few eV per event. So, nuclear fuel contains at least ten million times more usable energy per unit mass than does chemical fuel. The energy of nuclear fission is released as kinetic energy of the fission products and fragments, and as electromagnetic radiation in the form of gamma rays; in a nuclear reactor, the energy is converted to heat as the particles and gamma rays collide with the atoms that make up the reactor and its working fluid, usually water or occasionally heavy water or molten salts.
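As a rough illustration of this ratio, assuming ~200 MeV per fission and the conventional 4.184 MJ/kg energy content of TNT (both assumptions for this sketch, not figures from the article):

# Energy density of U-235 fission versus a chemical explosive.
N_A          = 6.022e23             # atoms per mole
MeV_to_J     = 1.602e-13
E_fission    = 200 * MeV_to_J       # joules per fissioned nucleus (~200 MeV, as above)
atoms_per_kg = N_A * 1000 / 235     # U-235 atoms in one kilogram

e_nuclear = E_fission * atoms_per_kg    # roughly 8e13 J/kg
e_tnt     = 4.184e6                     # J/kg, the conventional "ton of TNT" figure (assumption)

print(f"{e_nuclear:.1e} J/kg, about {e_nuclear / e_tnt:.0e} times TNT")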

When a uranium nucleus fissions into two daughter nuclei fragments, about 0.1 percent of the mass of the uranium nucleus[6] appears as the fission energy of ~200 MeV. For uranium-235 (total mean fission energy 202.5 MeV), typically ~169 MeV appears as the kinetic energy of the daughter nuclei, which fly apart at about 3% of the speed of light, due to Coulomb repulsion. Also, an average of 2.5 neutrons are emitted, with a mean kinetic energy per neutron of ~2 MeV (total of 4.8 MeV).[7] The fission reaction also releases ~7 MeV in prompt gamma ray photons. The latter figure means that a nuclear fission explosion or criticality accident emits about 3.5% of its energy as gamma rays, less than 2.5% of its energy as fast neutrons (total of both types of radiation ~ 6%), and the rest as kinetic energy of fission fragments (this appears almost immediately when the fragments impact surrounding matter, as simple heat). In an atomic bomb, this heat may serve to raise the temperature of the bomb core to 100 million kelvin and cause secondary emission of soft X-rays, which convert some of this energy to ionizing radiation. However, in nuclear reactors, the fission fragment kinetic energy remains as low-temperature heat, which itself causes little or no ionization.
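The percentages above follow directly from the quoted figures; a minimal sketch:

# Partitioning of the ~202.5 MeV released per U-235 fission, using the figures quoted above.
total        = 202.5    # MeV, total mean fission energy
fragments_ke = 169.0    # MeV, kinetic energy of the two fragments
neutrons_ke  = 4.8      # MeV, ~2.5 neutrons at ~2 MeV each
prompt_gamma = 7.0      # MeV in prompt gamma rays

print(f"prompt gammas           {prompt_gamma / total:.1%}")   # ~3.5%
print(f"fast neutrons           {neutrons_ke / total:.1%}")    # ~2.4%
print(f"fragment kinetic energy {fragments_ke / total:.1%}")   # ~83%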

So-called neutron bombs (enhanced radiation weapons) have been constructed which release a larger fraction of their energy as ionizing radiation (specifically, neutrons), but these are all thermonuclear devices which rely on the nuclear fusion stage to produce the extra radiation. The energy dynamics of pure fission bombs always remain at about 6% yield of the total in radiation, as a prompt result of fission.

The total prompt fission energy amounts to about 181 MeV, or ~ 89% of the total energy which is eventually released by fission over time. The remaining ~ 11% is released in beta decays which have various half-lives, but begin as a process in the fission products immediately; and in delayed gamma emissions associated with these beta decays. For example, in uranium-235 this delayed energy is divided into about 6.5 MeV in betas, 8.8 MeV in antineutrinos (released at the same time as the betas), and finally, an additional 6.3 MeV in delayed gamma emission from the excited beta-decay products (for a mean total of ~10 gamma ray emissions per fission, in all). Thus, about 6.5% of the total energy of fission is released some time after the event, as non-prompt or delayed ionizing radiation, and the delayed ionizing energy is about evenly divided between gamma and beta ray energy.

In a reactor that has been operating for some time, the radioactive fission products will have built up to steady state concentrations such that their rate of decay is equal to their rate of formation, so that their fractional total contribution to reactor heat (via beta decay) is the same as these radioisotopic fractional contributions to the energy of fission. Under these conditions, the 6.5% of fission which appears as delayed ionizing radiation (delayed gammas and betas from radioactive fission products) contributes to the steady-state reactor heat production under power. It is this output fraction which remains when the reactor is suddenly shut down (undergoes scram). For this reason, the reactor decay heat output begins at 6.5% of the full reactor steady state fission power, once the reactor is shut down. However, within hours, due to decay of these isotopes, the decay power output is far less. See decay heat for detail.
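The drop-off described here can be sketched with a Way–Wigner-style rule of thumb for decay heat; the formula and its coefficient are a common engineering approximation and are an assumption made here, not a figure from the article.

# Approximate decay heat after shutdown (rule-of-thumb, assumption):
# P(t)/P0 ~ 0.0622 * (t**-0.2 - (t + T)**-0.2), with t = seconds since shutdown
# and T = seconds of prior steady operation.
T = 3.15e7                      # about one year of operation
for t in (1, 60, 3600, 86400, 7 * 86400):
    frac = 0.0622 * (t ** -0.2 - (t + T) ** -0.2)
    print(f"{t:>8.0f} s after shutdown: {frac:.1%} of full power")
# The output falls from roughly 6% just after shutdown to well under 1% within a day,
# matching the qualitative statement above.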

The remainder of the delayed energy (8.8 MeV/202.5 MeV = 4.3% of total fission energy) is emitted as antineutrinos, which as a practical matter, are not considered "ionizing radiation." The reason is that energy released as antineutrinos is not captured by the reactor material as heat, and escapes directly through all materials (including the Earth) at nearly the speed of light, and into interplanetary space (the amount absorbed is minuscule). Neutrino radiation is ordinarily not classed as ionizing radiation, because it is almost entirely not absorbed and therefore does not produce effects (although the very rare neutrino event is ionizing). Almost all of the rest of the radiation (6.5% delayed beta and gamma radiation) is eventually converted to heat in a reactor core or its shielding.

Some processes involving neutrons are notable for absorbing or finally yielding energy — for example neutron kinetic energy does not yield heat immediately if the neutron is captured by a uranium-238 atom to breed plutonium-239, but this energy is emitted if the plutonium-239 is later fissioned. On the other hand, so-called delayed neutrons emitted as radioactive decay products with half-lives up to several minutes, from fission-daughters, are very important to reactor control, because they give a characteristic "reaction" time for the total nuclear reaction to double in size, if the reaction is run in a "delayed-critical" zone which deliberately relies on these neutrons for a supercritical chain-reaction (one in which each fission cycle yields more neutrons than it absorbs). Without their existence, the nuclear chain-reaction would be prompt critical and increase in size faster than it could be controlled by human intervention. In this case, the first experimental atomic reactors would have run away to a dangerous and messy "prompt critical reaction" before their operators could have manually shut them down (for this reason, designer Enrico Fermi included radiation-counter-triggered control rods, suspended by electromagnets, which could automatically drop into the center of Chicago Pile-1). If these delayed neutrons are captured without producing fissions, they produce heat as well.[8]

Product nuclei and binding energy[edit]
Main articles: fission product and fission product yield
In fission there is a preference to yield fragments with even proton numbers, which is called the odd-even effect on the fragments' charge distribution. However, no odd-even effect is observed on the fragment mass number distribution. This result is attributed to nucleon pair breaking.

In nuclear fission events the nuclei may break into any combination of lighter nuclei, but the most common event is not fission to equal mass nuclei of about mass 120; the most common event (depending on isotope and process) is a slightly unequal fission in which one daughter nucleus has a mass of about 90 to 100 u and the other the remaining 130 to 140 u.[9] Unequal fissions are energetically more favorable because this allows one product to be closer to the energetic minimum near mass 60 u (only a quarter of the average fissionable mass), while the other nucleus with mass 135 u is still not far out of the range of the most tightly bound nuclei (another statement of this, is that the atomic binding energy curve is slightly steeper to the left of mass 120 u than to the right of it).

Origin of the active energy and the curve of binding energy[edit]

The "curve of binding energy": A graph of binding energy per nucleon of common isotopes.
Nuclear fission of heavy elements produces energy because the specific binding energy (binding energy per mass) of intermediate-mass nuclei with atomic numbers and atomic masses close to 62Ni and 56Fe is greater than the nucleon-specific binding energy of very heavy nuclei, so that energy is released when heavy nuclei are broken apart. The total rest mass of the fission products (Mp) from a single reaction is less than the mass of the original fuel nucleus (M). The excess mass Δm = M − Mp is the invariant mass of the energy that is released as photons (gamma rays) and kinetic energy of the fission fragments, according to the mass-energy equivalence formula E = mc2.
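A minimal numerical check that ~200 MeV corresponds to roughly 0.1 percent of the fuel nucleus mass, using the standard conversion 1 u ≈ 931.5 MeV of rest energy:

# Mass defect implied by ~200 MeV of fission energy (E = mc^2).
u_to_MeV = 931.494      # MeV of rest energy per atomic mass unit
Q        = 200.0        # MeV released per fission (figure used in the article)
A_fuel   = 236          # approximate mass of the fissioning nucleus in u

delta_m = Q / u_to_MeV  # ~0.21 u converted to energy
print(f"mass defect ~{delta_m:.3f} u, i.e. {delta_m / A_fuel:.2%} of the nucleus")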

The variation in specific binding energy with atomic number is due to the interplay of the two fundamental forces acting on the component nucleons (protons and neutrons) that make up the nucleus. Nuclei are bound by an attractive nuclear force between nucleons, which overcomes the electrostatic repulsion between protons. However, the nuclear force acts only over relatively short ranges (a few nucleon diameters), since it follows an exponentially decaying Yukawa potential which makes it insignificant at longer distances. The electrostatic repulsion is of longer range, since it decays by an inverse-square rule, so that nuclei larger than about 12 nucleons in diameter reach a point where the total electrostatic repulsion overcomes the nuclear force and causes them to be spontaneously unstable. For the same reason, larger nuclei (more than about eight nucleons in diameter) are less tightly bound per unit mass than are smaller nuclei; breaking a large nucleus into two or more intermediate-sized nuclei releases energy. The origin of this energy is the nuclear force, which acts more efficiently in intermediate-sized nuclei because each nucleon has more neighbors within the short range of its attraction. The fragments are therefore more tightly bound than the original nucleus, and the difference in binding energy is set free.
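A rough quantitative sketch of this curve is the semi-empirical (liquid-drop) mass formula; the coefficients below are typical textbook values, and the Ba-141/Kr-92 split is just one illustrative channel, so the numbers should be read as an estimate rather than data from the article.

# Liquid-drop (semi-empirical mass formula) estimate of binding energy, in MeV.
def binding_energy(Z, A):
    aV, aS, aC, aA, aP = 15.75, 17.8, 0.711, 23.7, 11.18   # common textbook coefficients (assumption)
    N = A - Z
    if Z % 2 == 0 and N % 2 == 0:
        pairing = aP / A**0.5       # even-even nuclei are extra bound
    elif Z % 2 == 1 and N % 2 == 1:
        pairing = -aP / A**0.5      # odd-odd nuclei are less bound
    else:
        pairing = 0.0
    return aV*A - aS*A**(2/3) - aC*Z*(Z-1)/A**(1/3) - aA*(N-Z)**2/A + pairing

b_U236  = binding_energy(92, 236)   # ~7.6 MeV per nucleon
b_Ba141 = binding_energy(56, 141)   # ~8.3 MeV per nucleon
b_Kr92  = binding_energy(36, 92)    # ~8.5 MeV per nucleon
print(f"B/A: U-236 {b_U236/236:.2f}, Ba-141 {b_Ba141/141:.2f}, Kr-92 {b_Kr92/92:.2f} MeV")

# Energy released if U-236 splits into Ba-141 + Kr-92 + 3 free neutrons
# (free neutrons carry no binding energy):
print(f"Q ~ {b_Ba141 + b_Kr92 - b_U236:.0f} MeV")   # on the order of 160 MeV, the right ballpark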

Also because of the short range of the strong binding force, large stable nuclei must contain proportionally more neutrons than do the lightest elements, which are most stable with a 1 to 1 ratio of protons and neutrons. Nuclei which have more than 20 protons cannot be stable unless they have more than an equal number of neutrons. Extra neutrons stabilize heavy elements because they add to strong-force binding (which acts between all nucleons) without adding to proton–proton repulsion. Fission products have, on average, about the same ratio of neutrons and protons as their parent nucleus, and are therefore usually unstable to beta decay (which changes neutrons to protons) because they have proportionally too many neutrons compared to stable isotopes of similar mass.

This tendency for fission product nuclei to beta-decay is the fundamental cause of the problem of radioactive high level waste from nuclear reactors. Fission products tend to be beta emitters, emitting fast-moving electrons to conserve electric charge, as excess neutrons convert to protons in the fission-product atoms. See Fission products (by element) for a description of fission products sorted by element.

Chain reactions[edit]

A schematic nuclear fission chain reaction. 1. A uranium-235 atom absorbs a neutron and fissions into two new atoms (fission fragments), releasing three new neutrons and some binding energy. 2. One of those neutrons is absorbed by an atom of uranium-238 and does not continue the reaction. Another neutron is simply lost and does not collide with anything, also not continuing the reaction. However, one neutron does collide with an atom of uranium-235, which then fissions and releases two neutrons and some binding energy. 3. Both of those neutrons collide with uranium-235 atoms, each of which fissions and releases between one and three neutrons, which can then continue the reaction.
Main article: Nuclear chain reaction
Several heavy elements, such as uranium, thorium, and plutonium, undergo both spontaneous fission, a form of radioactive decay, and induced fission, a form of nuclear reaction. Elemental isotopes that undergo induced fission when struck by a free neutron are called fissionable; isotopes that undergo fission when struck by a thermal, slow-moving neutron are also called fissile. A few particularly fissile and readily obtainable isotopes (notably 233U, 235U and 239Pu) are called nuclear fuels because they can sustain a chain reaction and can be obtained in large enough quantities to be useful.

All fissionable and fissile isotopes undergo a small amount of spontaneous fission, which releases a few free neutrons into any sample of nuclear fuel. Such neutrons would escape rapidly from the fuel and become free neutrons, with a mean lifetime of about 15 minutes before decaying into protons and beta particles. However, neutrons almost invariably impact and are absorbed by other nuclei in the vicinity long before this happens (newly created fission neutrons move at about 7% of the speed of light, and even moderated neutrons move at about 8 times the speed of sound). Some neutrons will impact fuel nuclei and induce further fissions, releasing yet more neutrons. If enough nuclear fuel is assembled in one place, or if the escaping neutrons are sufficiently contained, then these freshly emitted neutrons outnumber the neutrons that escape from the assembly, and a sustained nuclear chain reaction will take place.

An assembly that supports a sustained nuclear chain reaction is called a critical assembly or, if the assembly is almost entirely made of a nuclear fuel, a critical mass. The word "critical" refers to a cusp in the behavior of the differential equation that governs the number of free neutrons present in the fuel: if less than a critical mass is present, then the amount of neutrons is determined by radioactive decay, but if a critical mass or more is present, then the amount of neutrons is controlled instead by the physics of the chain reaction. The actual mass of a critical mass of nuclear fuel depends strongly on the geometry and surrounding materials.
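A toy generation-by-generation model illustrates this cusp at criticality; the effective multiplication factor k used below is a simplification invented for illustration, since real criticality calculations depend on geometry, cross sections and neutron transport.

# Toy model: neutron population after each fission generation, N_next = k * N.
# k < 1: subcritical (population dies out); k = 1: critical (steady); k > 1: supercritical (grows).
def population(k, n0=1000, generations=50):
    n = float(n0)
    for _ in range(generations):
        n *= k
    return n

for k in (0.95, 1.00, 1.05):
    print(f"k = {k:.2f}: {population(k):,.0f} neutrons after 50 generations")
# With generation times of microseconds in an unmoderated assembly, even k slightly above 1
# produces explosive growth, which is why practical control relies on delayed neutrons.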

Not all fissionable isotopes can sustain a chain reaction. For example, 238U, the most abundant form of uranium, is fissionable but not fissile: it undergoes induced fission when impacted by an energetic neutron with over 1 MeV of kinetic energy. However, too few of the neutrons produced by 238U fission are energetic enough to induce further fissions in 238U, so no chain reaction is possible with this isotope. Instead, bombarding 238U with slow neutrons causes it to absorb them (becoming 239U) and decay by beta emission to 239Np which then decays again by the same process to 239Pu; that process is used to manufacture 239Pu in breeder reactors. In-situ plutonium production also contributes to the neutron chain reaction in other types of reactors after sufficient plutonium-239 has been produced, since plutonium-239 is also a fissile element which serves as fuel. It is estimated that up to half of the power produced by a standard "non-breeder" reactor is produced by the fission of plutonium-239 produced in place, over the total life-cycle of a fuel load.

Fissionable, non-fissile isotopes can be used as a fission energy source even without a chain reaction. Bombarding 238U with fast neutrons induces fissions, releasing energy as long as the external neutron source is present. This is an important effect in all reactors, where fast neutrons from the fissile isotope can cause the fission of nearby 238U nuclei, which means that some small part of the 238U is "burned up" in all nuclear fuels, especially in fast breeder reactors that operate with higher-energy neutrons. That same fast-fission effect is used to augment the energy released by modern thermonuclear weapons, by jacketing the weapon with 238U to react with neutrons released by nuclear fusion at the center of the device.

Fission reactors[edit]

The cooling towers of the Philippsburg Nuclear Power Plant, in Germany.
Critical fission reactors are the most common type of nuclear reactor. In a critical fission reactor, neutrons produced by fission of fuel atoms are used to induce yet more fissions, to sustain a controllable amount of energy release. Devices that produce engineered but non-self-sustaining fission reactions are subcritical fission reactors. Such devices use radioactive decay or particle accelerators to trigger fissions.

Critical fission reactors are built for three primary purposes, which typically involve different engineering trade-offs to take advantage of either the heat or the neutrons produced by the fission chain reaction:

power reactors are intended to produce heat for nuclear power, either as part of a generating station or a local power system such as a nuclear submarine.
research reactors are intended to produce neutrons and/or activate radioactive sources for scientific, medical, engineering, or other research purposes.
breeder reactors are intended to produce nuclear fuels in bulk from more abundant isotopes. The better known fast breeder reactor makes 239Pu (a nuclear fuel) from the naturally very abundant 238U (not a nuclear fuel). Thermal breeder reactors previously tested using 232Th to breed the fissile isotope 233U (thorium fuel cycle) continue to be studied and developed.
While, in principle, all fission reactors can act in all three capacities, in practice the tasks lead to conflicting engineering goals and most reactors have been built with only one of the above tasks in mind. (There are several early counter-examples, such as the Hanford N reactor, now decommissioned). Power reactors generally convert the kinetic energy of fission products into heat, which is used to heat a working fluid and drive a heat engine that generates mechanical or electrical power. The working fluid is usually water with a steam turbine, but some designs use other materials such as gaseous helium. Research reactors produce neutrons that are used in various ways, with the heat of fission being treated as an unavoidable waste product. Breeder reactors are a specialized form of research reactor, with the caveat that the sample being irradiated is usually the fuel itself, a mixture of 238U and 235U. For a more detailed description of the physics and operating principles of critical fission reactors, see nuclear reactor physics. For a description of their social, political, and environmental aspects, see nuclear power.

Fission bombs[edit]

The mushroom cloud of the atom bomb dropped on Nagasaki, Japan in 1945 rose some 18 kilometres (11 mi) above the bomb's hypocenter. The bomb killed at least 60,000 people.[10]
One class of nuclear weapon, a fission bomb (not to be confused with the fusion bomb), otherwise known as an atomic bomb or atom bomb, is a fission reactor designed to liberate as much energy as possible as rapidly as possible, before the released energy causes the reactor to explode (and the chain reaction to stop). Development of nuclear weapons was the motivation behind early research into nuclear fission: the Manhattan Project of the U.S. military during World War II carried out most of the early scientific work on fission chain reactions, culminating in the Trinity test bomb and the Little Boy and Fat Man bombs that were exploded over the cities of Hiroshima and Nagasaki, Japan, in August 1945.

Even the first fission bombs were thousands of times more explosive than a comparable mass of chemical explosive. For example, Little Boy weighed a total of about four tons (of which 60 kg was nuclear fuel) and was 11 feet (3.4 m) long; it yielded an explosion equivalent to about 15 kilotons of TNT, destroying a large part of the city of Hiroshima. Modern nuclear weapons (which include a thermonuclear fusion stage as well as one or more fission stages) are hundreds of times more energetic for their weight than the first pure-fission atomic bombs (see nuclear weapon yield), so that a modern single-missile warhead weighing less than 1/8 as much as Little Boy (see for example W88) has a yield of 475,000 tons of TNT, and could bring destruction to about 10 times the city area.
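Combining these figures with the ~200 MeV per fission quoted earlier gives a rough, purely illustrative estimate of how little of the fuel actually fissioned:

# Rough estimate of the mass of U-235 that actually fissioned in a ~15 kt explosion.
yield_J      = 15e3 * 4.184e9          # 15 kilotons of TNT expressed in joules
J_per_atom   = 200 * 1.602e-13         # ~200 MeV per fission, in joules
atoms        = yield_J / J_per_atom
kg_fissioned = atoms * 235 / 6.022e23 / 1000

print(f"~{kg_fissioned:.1f} kg fissioned out of ~60 kg of fuel "
      f"(~{kg_fissioned / 60:.0%} efficiency)")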

While the fundamental physics of the fission chain reaction in a nuclear weapon is similar to the physics of a controlled nuclear reactor, the two types of device must be engineered quite differently (see nuclear reactor physics). A nuclear bomb is designed to release all its energy at once, while a reactor is designed to generate a steady supply of useful power. While overheating of a reactor can lead to, and has led to, meltdown and steam explosions, the much lower uranium enrichment makes it impossible for a nuclear reactor to explode with the same destructive power as a nuclear weapon. It is also difficult to extract useful power from a nuclear bomb, although at least one rocket propulsion system, Project Orion, was intended to work by exploding fission bombs behind a massively padded and shielded spacecraft.

The strategic importance of nuclear weapons is a major reason why the technology of nuclear fission is politically sensitive. Viable fission bomb designs are, arguably, within the capabilities of many, being relatively simple from an engineering viewpoint. However, the difficulty of obtaining fissile nuclear material to realize the designs is the key to the relative unavailability of nuclear weapons to all but modern industrialized governments with special programs to produce fissile materials (see uranium enrichment and nuclear fuel cycle).

History[edit]
Discovery of nuclear fission[edit]
The discovery of nuclear fission occurred in 1938 in the buildings of the Kaiser Wilhelm Society for Chemistry, today part of the Free University of Berlin, following nearly five decades of work on the science of radioactivity and the elaboration of new nuclear physics that described the components of atoms. In 1911, Ernest Rutherford proposed a model of the atom in which a very small, dense and positively charged nucleus of protons (the neutron had not yet been discovered) was surrounded by orbiting, negatively charged electrons (the Rutherford model).[11] Niels Bohr improved upon this in 1913 by reconciling the quantum behavior of electrons (the Bohr model). Work by Henri Becquerel, Marie Curie, Pierre Curie, and Rutherford further elaborated that the nucleus, though tightly bound, could undergo different forms of radioactive decay, and thereby transmute into other elements. (For example, by alpha decay: the emission of an alpha particle, two protons and two neutrons bound together into a particle identical to a helium nucleus.)

Some work in nuclear transmutation had been done. In 1917, Rutherford was able to accomplish transmutation of nitrogen into oxygen, using alpha particles directed at nitrogen (14N + α → 17O + p). This was the first observation of a nuclear reaction, that is, a reaction in which particles from one decay are used to transform another atomic nucleus. Eventually, in 1932, a fully artificial nuclear reaction and nuclear transmutation was achieved by Rutherford's colleagues Ernest Walton and John Cockcroft, who used artificially accelerated protons against lithium-7, to split this nucleus into two alpha particles. The feat was popularly known as "splitting the atom", although it was not the modern nuclear fission reaction later discovered in heavy elements, which is discussed below.[12] Meanwhile, the possibility of combining nuclei—nuclear fusion—had been studied in connection with understanding the processes which power stars. The first artificial fusion reaction had been achieved by Mark Oliphant in 1932, using two accelerated deuterium nuclei (each consisting of a single proton bound to a single neutron) to create a helium nucleus.[13]
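The energy balance of these early transmutation reactions can be checked directly from mass-energy equivalence. The sketch below computes the Q-values of Rutherford's 14N(α,p)17O reaction and the Cockcroft–Walton splitting of lithium-7; the atomic masses used are approximate standard values assumed for this example, not figures taken from the article.

# Q-value of a nuclear reaction from mass-energy equivalence:
#   Q = (sum of initial masses - sum of final masses) * c^2
# Atomic masses below are approximate standard values (unified atomic mass units).
U_TO_MEV = 931.494  # MeV per atomic mass unit

masses = {
    "1H":  1.007825,
    "4He": 4.002602,
    "7Li": 7.016003,
    "14N": 14.003074,
    "17O": 16.999132,
}

def q_value(reactants, products):
    """Return the reaction Q-value in MeV (positive = energy released)."""
    dm = sum(masses[r] for r in reactants) - sum(masses[p] for p in products)
    return dm * U_TO_MEV

# Rutherford (1917): 14N + alpha -> 17O + p (slightly endothermic)
print(f"14N(a,p)17O: Q = {q_value(['14N', '4He'], ['17O', '1H']):+.2f} MeV")

# Cockcroft and Walton (1932): 7Li + p -> 2 alpha (releases about 17 MeV)
print(f"7Li(p,a)4He: Q = {q_value(['7Li', '1H'], ['4He', '4He']):+.2f} MeV")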

After English physicist James Chadwick discovered the neutron in 1932,[14] Enrico Fermi and his colleagues in Rome studied the results of bombarding uranium with neutrons in 1934.[15] Fermi concluded that his experiments had created new elements with 93 and 94 protons, which the group dubbed ausonium and hesperium. However, not all were convinced by Fermi's analysis of his results. The German chemist Ida Noddack notably suggested in print in 1934 that, instead of creating a new, heavier element 93, "it is conceivable that the nucleus breaks up into several large fragments."[16][17] However, Noddack's conclusion was not pursued at the time.


The experimental apparatus with which Otto Hahn and Fritz Strassmann discovered nuclear fission in 1938
After the Fermi publication, Otto Hahn, Lise Meitner, and Fritz Strassmann began performing similar experiments in Berlin. Meitner, an Austrian Jew, lost her citizenship with the "Anschluss", the occupation and annexation of Austria into Nazi Germany in March 1938, and in July 1938 she fled to Sweden and started a correspondence by mail with Hahn in Berlin. By coincidence, her nephew Otto Robert Frisch, also a refugee, was also in Sweden when Meitner received a letter from Hahn dated 19 December describing his chemical proof that some of the product of the bombardment of uranium with neutrons was barium. Hahn suggested a bursting of the nucleus, but he was unsure of what the physical basis for the results was. Barium had an atomic mass 40% less than uranium, and no previously known methods of radioactive decay could account for such a large difference in the mass of the nucleus. Frisch was skeptical, but Meitner trusted Hahn's ability as a chemist. Marie Curie had been separating barium from radium for many years, and the techniques were well known. According to Frisch:

Was it a mistake? No, said Lise Meitner; Hahn was too good a chemist for that. But how could barium be formed from uranium? No larger fragments than protons or helium nuclei (alpha particles) had ever been chipped away from nuclei, and to chip off a large number not nearly enough energy was available. Nor was it possible that the uranium nucleus could have been cleaved right across. A nucleus was not like a brittle solid that can be cleaved or broken; George Gamow had suggested early on, and Bohr had given good arguments that a nucleus was much more like a liquid drop. Perhaps a drop could divide itself into two smaller drops in a more gradual manner, by first becoming elongated, then constricted, and finally being torn rather than broken in two? We knew that there were strong forces that would resist such a process, just as the surface tension of an ordinary liquid drop tends to resist its division into two smaller ones. But nuclei differed from ordinary drops in one important way: they were electrically charged, and that was known to counteract the surface tension.

The charge of a uranium nucleus, we found, was indeed large enough to overcome the effect of the surface tension almost completely; so the uranium nucleus might indeed resemble a very wobbly unstable drop, ready to divide itself at the slightest provocation, such as the impact of a single neutron. But there was another problem. After separation, the two drops would be driven apart by their mutual electric repulsion and would acquire high speed and hence a very large energy, about 200 MeV in all; where could that energy come from? ...Lise Meitner... worked out that the two nuclei formed by the division of a uranium nucleus together would be lighter than the original uranium nucleus by about one-fifth the mass of a proton. Now whenever mass disappears energy is created, according to Einstein's formula E = mc², and one-fifth of a proton mass was just equivalent to 200 MeV. So here was the source for that energy; it all fitted![18]
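Frisch's back-of-the-envelope check can be reproduced in a few lines: one-fifth of a proton mass, converted through E = mc², is indeed of the order of 200 MeV. The proton rest energy used below is a standard value assumed for this sketch, not a figure quoted in the article.

# Reproducing Meitner and Frisch's estimate: the mass lost in fission
# (~1/5 of a proton mass) corresponds, via E = mc^2, to roughly 200 MeV.
PROTON_REST_ENERGY_MEV = 938.272  # standard value of m_p * c^2

mass_defect_energy = PROTON_REST_ENERGY_MEV / 5
print(f"1/5 of a proton mass is equivalent to about {mass_defect_energy:.0f} MeV")
# -> about 188 MeV, in rough agreement with the ~200 MeV of kinetic energy
#    carried off by the two mutually repelling fission fragments.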

In short, Meitner and Frisch had correctly interpreted Hahn's results to mean that the nucleus of uranium had split roughly in half. Frisch suggested the process be named "nuclear fission," by analogy to the process of living cell division into two cells, which was then called binary fission. Just as the term nuclear "chain reaction" would later be borrowed from chemistry, so the term "fission" was borrowed from biology.

On 22 December 1938, Hahn and Strassmann sent a manuscript to Naturwissenschaften reporting that they had discovered the element barium after bombarding uranium with neutrons.[19] Simultaneously, they communicated these results to Meitner in Sweden. She and Frisch correctly interpreted the results as evidence of nuclear fission.[20] Frisch confirmed this experimentally on 13 January 1939.[21][22] For proving that the barium resulting from his bombardment of uranium with neutrons was the product of nuclear fission, Hahn was awarded the Nobel Prize for Chemistry in 1944 (the sole recipient) "for his discovery of the fission of heavy nuclei". (The award was actually given to Hahn in 1945, as "the Nobel Committee for Chemistry decided that none of the year's nominations met the criteria as outlined in the will of Alfred Nobel." In such cases, the Nobel Foundation's statutes permit that year's prize be reserved until the following year.)[23]


German stamp honoring Otto Hahn and his discovery of nuclear fission (1979)
News spread quickly of the new discovery, which was correctly seen as an entirely novel physical effect with great scientific—and potentially practical—possibilities. Meitner’s and Frisch’s interpretation of the discovery of Hahn and Strassmann crossed the Atlantic Ocean with Niels Bohr, who was to lecture at Princeton University. I.I. Rabi and Willis Lamb, two Columbia University physicists working at Princeton, heard the news and carried it back to Columbia. Rabi said he told Enrico Fermi; Fermi gave credit to Lamb. Bohr soon thereafter went from Princeton to Columbia to see Fermi. Not finding Fermi in his office, Bohr went down to the cyclotron area and found Herbert L. Anderson. Bohr grabbed him by the shoulder and said: “Young man, let me explain to you about something new and exciting in physics.”[24] It was clear to a number of scientists at Columbia that they should try to detect the energy released in the nuclear fission of uranium from neutron bombardment. On 25 January 1939, a Columbia University team conducted the first nuclear fission experiment in the United States,[25] which was done in the basement of Pupin Hall; the members of the team were Herbert L. Anderson, Eugene T. Booth, John R. Dunning, Enrico Fermi, G. Norris Glasoe, and Francis G. Slack. The experiment involved placing uranium oxide inside an ionization chamber, irradiating it with neutrons, and measuring the energy thus released. The results confirmed that fission was occurring and hinted strongly that it was the isotope uranium-235 in particular that was fissioning. The next day, the Fifth Washington Conference on Theoretical Physics began in Washington, D.C. under the joint auspices of the George Washington University and the Carnegie Institution of Washington. There, the news on nuclear fission was spread even further, which fostered many more experimental demonstrations.[26]

During this period the Hungarian physicist Leó Szilárd, who was residing in the United States at the time, realized that the neutron-driven fission of heavy atoms could be used to create a nuclear chain reaction. Such a reaction using neutrons was an idea he had first formulated in 1933, upon reading Rutherford's disparaging remarks about generating power from his team's 1932 experiment using protons to split lithium. However, Szilárd had not been able to achieve a neutron-driven chain reaction with neutron-rich light atoms. In theory, if in a neutron-driven chain reaction the number of secondary neutrons produced was greater than one, then each such reaction could trigger multiple additional reactions, producing an exponentially increasing number of reactions. It was thus a possibility that the fission of uranium could yield vast amounts of energy for civilian or military purposes (i.e., electric power generation or atomic bombs).
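Szilárd's point about exponential growth follows from simple bookkeeping: if each fission yields on average more than one neutron that goes on to cause a further fission, the number of fissions per generation grows geometrically. The sketch below illustrates this with an assumed multiplication factor chosen purely for illustration.

# Illustration of Szilard's argument: with an effective multiplication factor
# k > 1, the number of fissions grows geometrically with each neutron generation.
def fissions_per_generation(k, generations, start=1):
    """Yield the number of fissions in each successive neutron generation."""
    n = start
    for _ in range(generations):
        yield n
        n *= k

# Assumed multiplication factor of 2 (purely illustrative).
for gen, n in enumerate(fissions_per_generation(k=2, generations=10)):
    print(f"generation {gen}: {n:.0f} fissions")

# At k = 2, after roughly 80 generations (each lasting a tiny fraction of a
# second) the cumulative number of fissions is comparable to the number of
# atoms in a kilogram of uranium -- the essence of an explosive chain reaction.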

Szilard now urged Fermi (in New York) and Frédéric Joliot-Curie (in Paris) to refrain from publishing on the possibility of a chain reaction, lest the Nazi government become aware of the possibilities on the eve of what would later be known as World War II. With some hesitation Fermi agreed to self-censor. But Joliot-Curie did not, and in April 1939 his team in Paris, including Hans von Halban and Lew Kowarski, reported in the journal Nature that the number of neutrons emitted per nuclear fission of 235U was 3.5.[27] (They later corrected this to 2.6 per fission.) Simultaneous work by Szilard and Walter Zinn confirmed these results. The results suggested the possibility of building nuclear reactors (first called "neutronic reactors" by Szilard and Fermi) and even nuclear bombs. However, much was still unknown about fission and chain reaction systems.

Fission chain reaction realized[edit]

Drawing of the first artificial reactor, Chicago Pile-1.
"Chain reactions" at that time were a known phenomenon in chemistry, but the analogous process in nuclear physics, using neutrons, had been foreseen as early as 1933 by Szilárd, although Szilárd at that time had no idea with what materials the process might be initiated. Szilárd considered that neutrons would be ideal for such a situation, since they lacked an electrostatic charge.

With the news of fission neutrons from uranium fission, Szilárd immediately understood the possibility of a nuclear chain reaction using uranium. In the summer of 1939, Fermi and Szilard proposed the idea of a nuclear reactor (pile) to mediate this process. The pile would use natural uranium as fuel. Fermi had shown much earlier that neutrons were far more effectively captured by atoms if they were of low energy (so-called "slow" or "thermal" neutrons), because for quantum reasons low-energy neutrons see the atoms as much larger targets. Thus, to slow down the secondary neutrons released by the fissioning uranium nuclei, Fermi and Szilard proposed a graphite "moderator", against which the fast, high-energy secondary neutrons would collide, effectively slowing them down. With enough uranium, and with sufficiently pure graphite, their "pile" could theoretically sustain a slow-neutron chain reaction. This would result in the production of heat, as well as the creation of radioactive fission products.
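How effective graphite is at slowing neutrons can be estimated with the standard textbook quantity ξ, the average logarithmic energy loss per elastic collision. The sketch below computes ξ for carbon-12 and the approximate number of collisions needed to slow a fission neutron to thermal energy; the formula and the energies assumed are standard textbook values, not figures from the article.

import math

# Average logarithmic energy decrement per elastic collision for a nucleus of
# mass number A (standard neutron slowing-down result):
#   xi = 1 + (A - 1)^2 / (2A) * ln((A - 1) / (A + 1))
def log_energy_decrement(A):
    return 1 + (A - 1) ** 2 / (2 * A) * math.log((A - 1) / (A + 1))

A_carbon = 12
xi = log_energy_decrement(A_carbon)

# Collisions needed to slow a ~2 MeV fission neutron to ~0.025 eV (thermal),
# both energies being typical assumed values:
E_fast, E_thermal = 2.0e6, 0.025  # eV
n_collisions = math.log(E_fast / E_thermal) / xi

print(f"xi for carbon-12: {xi:.3f}")
print(f"~{n_collisions:.0f} collisions to thermalize a fission neutron in graphite")

The result, on the order of a hundred collisions, is why a graphite pile must be physically large: the neutrons need room to scatter many times before they are slow enough to be captured efficiently by uranium.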

In August 1939, Szilard and fellow Hungarian refugee physicists Edward Teller and Eugene Wigner thought that the Germans might make use of the fission chain reaction and were spurred to attempt to attract the attention of the United States government to the issue. To that end, they persuaded German-Jewish refugee Albert Einstein to lend his name to a letter directed to President Franklin Roosevelt. The Einstein–Szilárd letter suggested the possibility of a uranium bomb deliverable by ship, which would destroy "an entire harbor and much of the surrounding countryside." The President received the letter on 11 October 1939, shortly after World War II began in Europe but two years before U.S. entry into it. Roosevelt ordered that a scientific committee be authorized to oversee uranium work and allocated a small sum of money for pile research.

In England, James Chadwick proposed an atomic bomb utilizing natural uranium, based on a paper by Rudolf Peierls which put the mass needed for a critical state at 30–40 tons. In America, J. Robert Oppenheimer thought that a cube of uranium deuteride 10 cm on a side (about 11 kg of uranium) might "blow itself to hell." In this design it was still thought that a moderator would need to be used for nuclear bomb fission (this turned out not to be the case if the fissile isotope was separated). In December, Werner Heisenberg delivered a report to the German Ministry of War on the possibility of a uranium bomb. Most of these models were still based on the assumption that the bombs would be powered by slow neutron reactions, and would thus be similar to a reactor undergoing a meltdown.

In Birmingham, England, Frisch teamed up with Peierls, a fellow German-Jewish refugee. They had the idea of using a purified mass of the uranium isotope 235U, whose cross section had just been determined and was much larger than that of 238U or of natural uranium (which is 99.3% the latter isotope). Assuming that the cross section for fast-neutron fission of 235U was the same as for slow-neutron fission, they determined that a pure 235U bomb could have a critical mass of only 6 kg instead of tons, and that the resulting explosion would be tremendous. (The amount actually turned out to be 15 kg, although several times this amount was used in the actual uranium (Little Boy) bomb.) In February 1940 they delivered the Frisch–Peierls memorandum. Ironically, they were still officially considered "enemy aliens" at the time. Glenn Seaborg, Joseph W. Kennedy, Arthur Wahl and Italian-Jewish refugee Emilio Segrè shortly thereafter discovered 239Pu in the decay products of 239U produced by bombarding 238U with neutrons, and determined it to be a fissile material, like 235U.

The possibility of isolating uranium-235 was technically daunting, because uranium-235 and uranium-238 are chemically identical and differ in mass by only the weight of three neutrons. However, if a sufficient quantity of uranium-235 could be isolated, it would allow for a fast-neutron fission chain reaction. This would be extremely explosive, a true "atomic bomb." The discovery that plutonium-239 could be produced in a nuclear reactor pointed towards another approach to a fast-neutron fission bomb. Both approaches were extremely novel and not yet well understood, and there was considerable scientific skepticism at the idea that they could be developed in a short amount of time.

On June 28, 1941, the Office of Scientific Research and Development was formed in the U.S. to mobilize scientific resources and apply the results of research to national defense. In September, Fermi assembled his first nuclear "pile" or reactor, in an attempt to create a slow neutron-induced chain reaction in uranium, but the experiment failed to achieve criticality, due to a lack of the proper materials or an insufficient quantity of those that were available.

Producing a fission chain reaction in natural uranium fuel was found to be far from trivial. Early nuclear reactors did not use isotopically enriched uranium, and in consequence they were required to use large quantities of highly purified graphite as a neutron moderator. Use of ordinary water (as opposed to heavy water) in nuclear reactors requires enriched fuel: the partial separation and relative enrichment of the rare 235U isotope from the far more common 238U isotope. Typically, reactors also require extremely chemically pure neutron moderator materials such as deuterium (in heavy water), helium, beryllium, or carbon, the latter usually as graphite. (The high purity for carbon is required because many chemical impurities, such as the boron-10 component of natural boron, are very strong neutron absorbers and thus poison the chain reaction, ending it prematurely.)
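The sensitivity to boron contamination mentioned above can be made quantitative with approximate thermal-neutron absorption cross sections (textbook values assumed here, not taken from the article): natural boron absorbs thermal neutrons roughly 200,000 times more strongly per atom than carbon, so even parts-per-million impurities are significant.

# Rough estimate of how a boron impurity degrades graphite as a moderator.
# Cross sections are approximate textbook thermal-neutron absorption values (barns).
SIGMA_A_CARBON = 0.0035   # natural carbon
SIGMA_A_BORON  = 767.0    # natural boron (dominated by its boron-10 component)

def relative_absorption_increase(boron_atom_fraction):
    """Extra thermal-neutron absorption added by boron, relative to pure carbon."""
    return (boron_atom_fraction * SIGMA_A_BORON) / SIGMA_A_CARBON

for ppm in (0.1, 1.0, 10.0):
    increase = relative_absorption_increase(ppm * 1e-6)
    print(f"{ppm:>5} ppm boron -> parasitic absorption ~{increase * 100:.0f}% of carbon's own")

Under these assumed values, even one part per million of boron adds parasitic absorption comparable to a fifth of the graphite's own, which is why wartime graphite purity was such a decisive issue.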

Production of such materials at industrial scale had to be solved for nuclear power generation and weapons production to be accomplished. Up to 1940, the total amount of uranium metal produced in the USA was not more than a few grams, and even this was of doubtful purity; of metallic beryllium not more than a few kilograms; and concentrated deuterium oxide (heavy water) not more than a few kilograms. Finally, carbon had never been produced in quantity with anything like the purity required of a moderator.

The problem of producing large amounts of high-purity uranium was solved by Frank Spedding using the thermite or "Ames" process. Ames Laboratory was established in 1942 to produce the large amounts of natural (unenriched) uranium metal that would be necessary for the research to come. The critical nuclear chain-reaction success of the Chicago Pile-1 (December 2, 1942), which used unenriched (natural) uranium, like all of the atomic "piles" which produced the plutonium for the atomic bomb, was also due specifically to Szilard's realization that very pure graphite could be used as the moderator of even natural uranium "piles". In wartime Germany, failure to appreciate the qualities of very pure graphite led to reactor designs dependent on heavy water, which in turn was denied to the Germans by Allied attacks in Norway, where heavy water was produced. These difficulties, among many others, prevented the Nazis from building a nuclear reactor capable of criticality during the war, although they never put as much effort as the United States into nuclear research, focusing instead on other technologies (see German nuclear energy project for more details).

Manhattan Project and beyond[edit]
See also: Manhattan Project
In the United States, an all-out effort for making atomic weapons was begun in late 1942. This work was taken over by the U.S. Army Corps of Engineers in 1943, and known as the Manhattan Engineer District. The top-secret Manhattan Project, as it was colloquially known, was led by General Leslie R. Groves. Among the project's dozens of sites were: Hanford Site in Washington state, which had the first industrial-scale nuclear reactors; Oak Ridge, Tennessee, which was primarily concerned with uranium enrichment; and Los Alamos, in New Mexico, which was the scientific hub for research on bomb development and design. Other sites, notably the Berkeley Radiation Laboratory and the Metallurgical Laboratory at the University of Chicago, played important contributing roles. Overall scientific direction of the project was managed by the physicist J. Robert Oppenheimer.

In July 1945, the first atomic bomb, dubbed "Trinity", was detonated in the New Mexico desert. It was fueled by plutonium created at Hanford. In August 1945, two more atomic bombs—"Little Boy", a uranium-235 bomb, and "Fat Man", a plutonium bomb—were used against the Japanese cities of Hiroshima and Nagasaki.

In the years after World War II, many countries were involved in the further development of nuclear fission for the purposes of nuclear reactors and nuclear weapons. The UK opened the first commercial nuclear power plant in 1956. As of 2013, there were 437 reactors in 31 countries.

Natural fission chain-reactors on Earth[edit]
Criticality in nature is uncommon. At three ore deposits at Oklo in Gabon, sixteen sites (the so-called Oklo Fossil Reactors) have been discovered at which self-sustaining nuclear fission took place approximately 2 billion years ago. They were unknown until 1972 (though postulated by Paul Kuroda in 1956[28]), when French physicist Francis Perrin discovered the Oklo Fossil Reactors, and it was then realized that nature had beaten humans to the punch. Large-scale natural uranium fission chain reactions, moderated by normal water, had occurred far in the past and would not be possible now. This ancient process was able to use normal water as a moderator only because, 2 billion years before the present, natural uranium was richer in the shorter-lived fissile isotope 235U (about 3%) than natural uranium available today (which is only 0.7%, and must be enriched to 3% to be usable in light-water reactors).
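The claim that natural uranium was about 3% 235U two billion years ago follows from the two isotopes' different half-lives. The minimal sketch below back-extrapolates today's roughly 0.7% enrichment using standard half-life values, which are assumptions of this example rather than figures quoted in the article.

# Back-extrapolating natural uranium enrichment ~2 billion years ago.
# Half-lives are standard values (years); present-day abundance ~0.72% 235U.
HALF_LIFE_U235 = 7.04e8
HALF_LIFE_U238 = 4.468e9

def enrichment_at(years_ago, present_u235_fraction=0.0072):
    """Atom fraction of 235U in natural uranium `years_ago` years in the past."""
    n235 = present_u235_fraction * 2 ** (years_ago / HALF_LIFE_U235)
    n238 = (1 - present_u235_fraction) * 2 ** (years_ago / HALF_LIFE_U238)
    return n235 / (n235 + n238)

print(f"Enrichment 2.0 Gyr ago: {enrichment_at(2.0e9) * 100:.1f}% 235U")
# -> a few percent, comparable to the enriched fuel of modern light-water
#    reactors, which is why ordinary water could serve as a moderator at Oklo.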


Yu | Mythic Inconceivable!
 
more |
XBL:
PSN:
Steam:
ID: Yutaka
IP: Logged

12,707 posts
Almost always, with moderation
This fucking thread.


 
Naru
| The Tide Caller
 
more |
XBL: Naru No Baka
PSN:
Steam: The Tide Caller
ID: GasaiYuno
IP: Logged

18,501 posts
The Rage....
So much knowledge packed in thready goodness

:^)