iPhone

Posted: January 16, 2014 in iPhone

The iPhone (/ˈaɪfoʊn/ EYE-fohn) is a line of smartphones designed and marketed by Apple Inc. It runs Apple’s iOS mobile operating system.[14] The first-generation iPhone was released on June 29, 2007; the most recent iPhones, the seventh-generation iPhone 5C and iPhone 5S, were introduced on September 10, 2013.

The user interface is built around the device’s multi-touch screen, including a virtual keyboard. The iPhone has Wi-Fi and cellular connectivity (2G, 3G, 4G, and LTE). An iPhone can shoot video (though this was not a standard feature until the iPhone 3GS), take photos, play music, send and receive email, browse the web, send texts, and receive visual voicemail. Other functions — video games, reference works, GPS navigation, social networking, etc. — can be enabled by downloading application programs (“apps”); as of October 2013, the App Store offered more than one million apps by Apple and third parties.[15]

There are seven generations of iPhone models, each accompanied by one of the seven major releases of iOS. The original first-generation iPhone was a GSM phone and established design precedents, such as a button placement that has persisted through all models and a screen size maintained for the next four iterations. The iPhone 3G added 3G cellular network capabilities and A-GPS location. The iPhone 3GS added a faster processor and a higher-resolution camera that could record video at 480p. The iPhone 4 featured a higher-resolution 960×640 “Retina display”, a VGA front-facing camera for video calling and other apps, and a 5-megapixel rear-facing camera with 720p video capture.[16] The iPhone 4S upgraded to an 8-megapixel camera with 1080p video recording, a dual-core A5 processor, and a natural-language voice control system called Siri.[17] The iPhone 5 featured the dual-core A6 processor, increased the size of the Retina display to 4 inches, and replaced the 30-pin connector with an all-digital Lightning connector. The iPhone 5S features the dual-core 64-bit A7 processor, an updated camera with a larger aperture and dual-LED flash, and the Touch ID fingerprint scanner, integrated into the home button. The iPhone 5C features the same A6 chip as the iPhone 5, along with a new backside-illuminated FaceTime camera and a new casing made of polycarbonate. As of 2013, the iPhone 3GS had the longest production run, 1181 days, followed by the iPhone 4, produced for 1174 days.[18]

The resounding sales of the iPhone have been credited with reshaping the smartphone industry and helping make Apple one of the world’s most valuable publicly traded companies in 2011–12.[19] The iPhone is the top-selling phone of any kind in some countries, including the United States[20] and Japan.[21]

iPhone

The 5S (left) and 5C (right) to scale
Developer Apple Inc.
Manufacturer Foxconn (on contract)
Type Smartphone
Release date
  • 1st gen: June 29, 2007
  • 3G: July 11, 2008
  • 3GS: June 19, 2009
  • 4: June 24, 2010
  • 4S: October 14, 2011
  • 5: September 21, 2012
  • 5C and 5S: September 20, 2013
Units sold 250 million[1]
Operating system iOS
Power
  • Built-in rechargeable Li-Po battery
  • 1st gen: 3.7 V 5.18 W·h (1400 mA·h)
  • 3G: 3.7 V 4.12 W·h (1150 mA·h)
  • 3GS: 3.7 V 4.51 W·h (1219 mA·h)
  • 4: 3.7 V 5.25 W·h (1420 mA·h)
  • 4S: 3.7 V 5.3 W·h (1432 mA·h)
  • 5: 3.8 V 5.45 W·h (1440 mA·h)
  • 5S: 3.8 V 5.92 W·h (1560 mA·h)
Memory
  • 1st gen and 3G: 128 MB LPDDR DRAM (137 MHz)
  • 3GS: 256 MB LPDDR DRAM (200 MHz)
  • 4: 512 MB LPDDR2 DRAM (200 MHz)
  • 4S: 512 MB LPDDR2 DRAM
  • 5: 1 GB LPDDR2 DRAM
  • 5S: 1 GB LPDDR3 DRAM
Storage 4, 8, 16, 32, or 64 GB flash memory[6]
Display
  • 1st gen and 3G: 3.5 in (89 mm), 3:2 aspect ratio, scratch-resistant[7] glossy glass-covered screen, 262,144-color (18-bit) TN LCD, 480×320 px (HVGA) at 163 ppi, 200:1 contrast ratio
  • 3GS: in addition to the above, a fingerprint-resistant oleophobic coating,[8] and 262,144-color (18-bit) TN LCD with hardware spatial dithering[9]
  • 4 and 4S: 3.5 in (89 mm), 3:2 aspect ratio, aluminosilicate glass-covered 16,777,216-color (24-bit) IPS LCD screen, 960×640 px at 326 ppi, 800:1 contrast ratio, 500 cd/m² max brightness
  • 5: 4.0 in (100 mm), 16:9 aspect ratio, 1136×640 px at 326 ppi
Connectivity
  • Wi-Fi: 802.11 a/b/g/n
  • Bluetooth:
  • 1st gen, 3G, 3GS, and 4: Bluetooth 2.1 + EDR
  • 4S, 5, 5C, and 5S: Bluetooth 4.0
  • GSM models also include: UMTS / HSDPA (850, 1900, 2100 MHz); GSM / EDGE (850, 900, 1800, 1900 MHz)
  • CDMA model also includes: CDMA / EV-DO Rev. A (800, 1900 MHz)
  • 5, GSM models: LTE (700, 2100 MHz); UMTS / HSDPA / HSPA+ / DC-HSDPA (850, 900, 1900, 2100 MHz); GSM / EDGE (850, 900, 1800, 1900 MHz)
  • 5, CDMA model: LTE (700 MHz); CDMA / EV-DO Rev. A (800, 1900 MHz); UMTS / HSDPA / HSPA+ / DC-HSDPA (850, 900, 1900, 2100 MHz); GSM / EDGE (850, 900, 1800, 1900 MHz)
Dimensions
  • 1st gen:
  • 115 mm (4.5 in) H
  • 61 mm (2.4 in) W
  • 11.6 mm (0.46 in) D
  • 3G and 3GS:
  • 115.5 mm (4.55 in) H
  • 62.1 mm (2.44 in) W
  • 12.3 mm (0.48 in) D
  • 4 and 4S:
  • 115.2 mm (4.54 in) H
  • 58.6 mm (2.31 in) W
  • 9.3 mm (0.37 in) D
  • 5:
  • 123.8 mm (4.87 in) H
  • 58.6 mm (2.31 in) W
  • 7.6 mm (0.30 in) D
Weight
  • 1st gen and 3GS:
  • 135 g (4.8 oz)
  • 3G: 133 g (4.7 oz)
  • 4: 137 g (4.8 oz)
  • 4S: 140 g (4.9 oz)
  • 5: 112 g (4.0 oz)
Website www.apple.com/iphone


History and availability

Main article: History of the iPhone

Development of what was to become the iPhone began in 2004, when Apple started to gather a team of 1000 employees to work on the highly confidential “Project Purple”,[22] including Sir Jonathan Ive, the designer behind the iPhone.[23] Apple CEO Steve Jobs steered the original focus away from a tablet, like the iPad, and towards a phone.[24] Apple created the device during a secretive collaboration with AT&T Mobility—Cingular Wireless at the time—at an estimated development cost of US$150 million over thirty months.[25]

Apple rejected the “design by committee” approach that had yielded the Motorola ROKR E1, a largely unsuccessful[26] collaboration with Motorola. Instead, Cingular gave Apple the liberty to develop the iPhone’s hardware and software in-house[27][28] and even paid Apple a fraction of its monthly service revenue (until the iPhone 3G),[29] in exchange for four years of exclusive US sales, until 2011.

Jobs unveiled the iPhone to the public on January 9, 2007, at the Macworld 2007 convention at the Moscone Center in San Francisco.[30] The two initial models, a 4 GB model priced at US$499 and an 8 GB model at US$599, went on sale in the United States on June 29, 2007, at 6:00 pm local time, while hundreds of customers lined up outside the stores nationwide.[31] The passionate reaction to the launch of the iPhone resulted in sections of the media dubbing it the ‘Jesus phone’.[32][33] The first-generation iPhone was made available in the UK, France, and Germany in November 2007, and in Ireland and Austria in the spring of 2008.

Map of worldwide iPhone availability: available since the original release, available since the release of the iPhone 3G, or coming soon.

On July 11, 2008, Apple released the iPhone 3G in twenty-two countries, including the original six.[34] Apple released the iPhone 3G in upwards of eighty countries and territories.[35] Apple announced the iPhone 3GS on June 8, 2009, along with plans to release it later in June, July, and August, starting with the US, Canada and major European countries on June 19. Many would-be users objected to the iPhone’s cost,[36] and 40% of users have household incomes over US$100,000.[37]

The back of the original first generation iPhone was made of aluminum with a black plastic accent. The iPhone 3G and 3GS feature a full plastic back to increase the strength of the GSM signal.[38] The iPhone 3G was available in an 8 GB black model, or a black or white option for the 16 GB model. The iPhone 3GS was available in both colors, regardless of storage capacity.

The iPhone 4 has an aluminosilicate glass front and back with a stainless steel edge that serves as the antennas. It was at first available in black; the white version was announced, but not released until April 2011, 10 months later.

The iPhone has garnered positive reviews from such critics as David Pogue[39] and Walt Mossberg.[40][41] The iPhone attracts users of all ages,[37]and besides consumer use, the iPhone has also been adopted for business purposes.[42]

Users of the iPhone 4 reported dropped/disconnected telephone calls when holding their phones in a certain way. This became known as antennagate.[43]

On January 11, 2011, Verizon announced during a media event that it had reached an agreement with Apple and would begin selling a CDMA2000 iPhone 4. Verizon said it would be available for pre-order on February 3, with a release set for February 10.[44][45] In February 2011, the Verizon iPhone accounted for 4.5% of all iPhone ad impressions in the US on Millennial Media’s mobile ad network.[46]

From 2007 to 2011, Apple spent $647 million on advertising for the iPhone in the US.[22]

On September 27, 2011, Apple sent invitations for a press event to be held October 4, 2011, at 10:00 am at its Cupertino headquarters to announce details of the next-generation iPhone, which turned out to be the iPhone 4S. Over 1 million 4S models were sold in the first 24 hours after its release in October 2011.[47] Due to the large volumes of the iPhone being manufactured and its high selling price, Apple became the largest mobile handset vendor in the world by revenue in 2011, surpassing long-time leader Nokia.[48] American carrier C Spire Wireless announced that it would be carrying the iPhone 4S on October 19, 2011.[49]

In January 2012, Apple reported its best quarterly earnings ever, with 53% of its revenue coming from the sale of 37 million iPhones, at an average selling price of nearly $660. The average selling price has remained fairly constant for most of the phone’s lifespan, hovering between $622 and $660.[50] The production price of the iPhone 4S was estimated by IHS iSuppli, in October 2011, to be $188, $207 and $245, for the 16 GB, 32 GB and 64 GB models, respectively.[51] Labor costs are estimated at between $12.50 and $30 per unit, with workers on the iPhone assembly line making $1.78 an hour.[52]

In February 2012, ComScore reported that 12.4% of US mobile subscribers use an iPhone.[53] Approximately 6.4 million iPhones are active in the US alone.[37]

On September 12, 2012, Apple announced the iPhone 5. It has a 4-inch display, up from its predecessors’ 3.5-inch screens, with the same 326 pixels per inch found in the iPhone 4 and 4S. The iPhone 5 has the A6 system-on-chip, which is 22% smaller than the iPhone 4S’ A5 and twice as fast, doubling the graphics performance of its predecessor. The device is 18% thinner than the iPhone 4S, measuring 7.6 mm, and is 20% lighter at 112 grams.

On July 6, 2013, it was reported that Apple was in talks with Korean mobile carrier, SK Telecom, to release the next generation iPhone with LTE Advanced technology.[54]

On July 22, 2013, the company’s suppliers said that Apple was testing larger screens for the iPhone and iPad: “Apple has asked for prototype smartphone screens larger than 4 inches and has also asked for screen designs for a new tablet device measuring slightly less than 13 inches diagonally, they said.”[55]

On September 10, 2013, Apple unveiled two new iPhone models during a highly anticipated press event in Cupertino, California. The iPhone 5C, a mid-range-priced version of the handset designed to increase accessibility due to its price, is available in five colors (green, blue, yellow, pink, and white) and is made of plastic. The iPhone 5S comes in three colors (black, white, and gold), and its home button is replaced with a fingerprint scanner. Both phones shipped on September 20, 2013.[56]

Sales and profits

For additional sales information, see the table of quarterly sales.

Before the release of the iPhone, handset manufacturers such as Nokia and Motorola were enjoying record sales of cell phones based more on fashion and brand rather than technological innovation.[57] The smartphone market, dominated at the time by BlackBerry OS and Windows Mobile devices, was a “staid, corporate-led smartphone paradigm” focused on enterprise needs. However, with its capacitive touchscreen and consumer-friendly design, the iPhone fundamentally changed the mobile industry, with Steve Jobs proclaiming in 2007 that “the phone was not just a communication tool but a way of life”.[58] The dominant mobile operating systems at the time, such as Symbian, BlackBerry OS, and Windows Mobile, were not designed to handle additional tasks beyond communication and basic functions; iPhone OS (renamed iOS in 2010) was designed as a robust OS with capabilities such as multitasking and graphics in order to meet future consumer demands.[59] These operating systems never focused on applications and developers, and due to infighting among manufacturers as well as the complex bureaucracy and bloat of the OS, they never developed a thriving ecosystem like Apple’s App Store or Android’s Google Play.[58][60] Rival manufacturers have been forced to spend more on software and development costs to catch up to the iPhone. The iPhone’s success has led to a decline in sales of high-end fashion phones and business-oriented smartphones such as Vertu and BlackBerry, respectively.[58][61]

Apple sold 6.1 million first-generation iPhone units over five quarters.[62] Sales in Q4 2008 temporarily surpassed Research In Motion’s (RIM) BlackBerry sales of 5.2 million units, which briefly made Apple the third-largest mobile phone manufacturer by revenue, after Nokia and Samsung[63] (some of this income was deferred[64]). Recorded sales grew steadily thereafter, and by the end of fiscal year 2010, a total of 73.5 million iPhones had been sold.[65]

By 2010, the iPhone had a market share of barely 4% of all cellphones; however, Apple pulled in more than 50% of the total profit generated by global cellphone sales.[66] Apple sold 14.1 million iPhones in Q3 2010, representing 91% unit growth over the year-ago quarter, well ahead of IDC’s latest published estimate of 64% growth for the global smartphone market in the September quarter. Apple’s sales surpassed Research in Motion’s 12.1 million BlackBerry units sold in its most recent quarter, ended August 2010.[2] In the United States market alone in Q3 2010, 9.1 million Android-powered smartphones shipped, for 43.6% of the market; Apple’s iOS was the number-two smartphone operating system at 26.2%, but the 5.5 million iPhones sold made it the most popular single device.[67]

On March 2, 2011, at the iPad 2 launch event, Apple announced that it had sold 100 million iPhones worldwide.[68] As a result of the iPhone’s sales volume and high selling price, headlined by the iPhone 4S, Apple became the largest mobile handset vendor in the world by revenue in 2011, surpassing long-time leader Nokia.[48] While the Samsung Galaxy S II proved more popular than the iPhone 4S in parts of Europe, the iPhone 4S was dominant in the United States.[69]

For the eight largest phone manufacturers in Q1 2012, according to Horace Dediu at Asymco, Apple and Samsung combined to take 99% of industry profits (HTC took the remaining 1%, while RIM, LG, Sony Ericsson, Motorola, and Nokia all suffered losses), with Apple earning 73 cents of every dollar earned by the phone makers. As industry profits grew from $5.3 billion in Q1 2010 to $14.4 billion in Q1 2012 (quadruple the profits of 2007),[70][71] Apple managed to increase its share of these profits. This is due to increasing carrier subsidies and the high selling prices of the iPhone, which had a negative effect on the wireless carriers (AT&T Mobility, Verizon, and Sprint), which saw their EBITDA service margins drop as they sold an increasing number of iPhones.[72][73][74] By the quarter ended March 31, 2012, Apple’s sales from the iPhone alone (at $22.7 billion) exceeded Microsoft’s total revenue from all of its businesses ($17.4 billion).[75]

In Q4 2012, the iPhone 5 and iPhone 4S were the best-selling handsets, with sales of 27.4 million (13% of smartphones worldwide) and 17.4 million units, respectively, and the Samsung Galaxy S III third with 15.4 million. According to Strategy Analytics’ data, this was “an impressive performance, given the iPhone portfolio’s premium pricing”, and the Galaxy S III’s global popularity “appears to have peaked” (the Galaxy S III was touted as an iPhone-killer by some in the press when it was released[76][77]). While Samsung has led in worldwide sales of smartphones, Apple’s iPhone line has still managed to top Samsung’s smartphone offerings in the United States,[78] with 37.8% of that market against Samsung’s 21.4%. iOS grew 3.5 percentage points to 37.8%, while Android slid 1.3 points to a 52.3% share.[79]

The continued popularity of the iPhone despite growing Android competition was also attributed to Apple being able to deliver iOS updates over the air, while Android updates are frequently impeded by carrier testing requirements and hardware tailoring, forcing consumers to purchase a new Android smartphone to get the latest version of that OS.[80] However, by 2013, Apple’s market share had fallen to 13.1%, due to the surging popularity of Android offerings and because the iPhone does not compete in the feature phone or prepaid segments.[81]

Apple announced on September 1, 2013, that its iPhone trade-in program would be implemented at all of its 250 specialty stores in the US. For the program to become available, customers must have a valid contract and must purchase a new phone, rather than simply receive credit to be used at a later date. A significant part of the program’s goal is to increase the number of customers who purchase iPhones at Apple stores rather than carrier stores.[82]

On September 20, 2013, the day the iPhone 5S and 5C models went on sale, the longest queue ever was observed at Apple’s New York City flagship store, with prominent queues also reported in San Francisco and in Canada; long lines formed at locations throughout the world in anticipation.[83] Apple also increased production of the gold-colored iPhone 5S by an additional one-third due to the particularly strong demand that emerged.[84]

Apple released its opening-weekend sales results for the 5C and 5S models, showing an all-time high for the product line, with 9 million handsets sold; the previous record was set in 2012, when 5 million handsets were sold during the opening weekend of the iPhone 5. This was the first time that Apple had simultaneously launched two models, and the inclusion of China in the list of launch markets contributed to the record result.[85] Apple also announced that, as of September 23, 2013, 200 million devices were running the iOS 7 update, making it the “fastest software upgrade in history”.[86]

An Apple Store located at the Christiana Mall in Newark, Delaware, claimed the highest iPhone sales figures in November 2013. The store’s high sales results are attributed to the absence of a sales tax in the state of Delaware.[87]

The finalization of a deal between Apple and China Mobile, the world’s largest mobile network, was announced in late December 2013. The multi-year agreement provides iPhone access to over 760 million China Mobile subscribers.[88]

Hardware

Screen and input

The touchscreen on the first five generations is a 9 cm (3.5 in) liquid crystal display with scratch-resistant glass, while the one on the iPhone 5 is 4 inches.[7] The capacitive touchscreen is designed for a bare finger, or multiple fingers for multi-touch sensing. The screens on the first three generations have a resolution of 320×480 (HVGA) at 163 ppi; those on the iPhone 4 and iPhone 4S have a resolution of 640×960 at 326 ppi, and the iPhone 5, 640×1136 at 326 ppi. The iPhone 5’s screen has an aspect ratio of almost exactly 16:9.
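
The quoted densities follow directly from resolution and diagonal size. As a quick check of the figures above, a minimal sketch in Swift (the function name is ours; the nominal 3.5 in diagonal is rounded, which is why the first result lands slightly above the advertised 326 ppi):

```swift
import Foundation

// Pixel density is the diagonal pixel count divided by the diagonal size:
// ppi = sqrt(width^2 + height^2) / diagonal_inches
func pixelsPerInch(width: Double, height: Double, diagonalInches: Double) -> Double {
    (width * width + height * height).squareRoot() / diagonalInches
}

print(pixelsPerInch(width: 640, height: 960, diagonalInches: 3.5))  // ≈ 329.7 (iPhone 4/4S, advertised 326)
print(pixelsPerInch(width: 640, height: 1136, diagonalInches: 4.0)) // ≈ 326.0 (iPhone 5)
```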

The touch and gesture features of the iPhone are based on technology originally developed by FingerWorks.[89] Most gloves and styli prevent the necessary electrical conductivity,[90][91][92][93] although capacitive styli can be used with the iPhone’s finger-touch screen. The iPhone 3GS and later also feature a fingerprint-resistant oleophobic coating.[94]

The top and side of an iPhone 3GS, externally identical to the iPhone 3G. From left to right, sides: wake/sleep button, SIM card slot, headphone jack, silence switch, volume controls. The switches were black plastic on the first generation iPhone. Top: earpiece, screen.

The iPhone has a minimal hardware user interface, featuring five buttons. The only physical menu button is situated directly below the display, and is called the “Home button” because it closes the active app and navigates to the home screen of the interface. The home button is denoted not by a house, as on many other similar devices, but a rounded square, reminiscent of the shape of icons on the home screen.

A multifunction sleep/wake button is located on the top of the device. It serves as the unit’s power button, and also controls phone calls. When a call is received, pressing the sleep/wake button once silences the ringtone, and when pressed twice transfers the call to voicemail. Situated on the left spine are the volume adjustment controls. The iPhone 4 has two separate circular buttons to increase and decrease the volume; all earlier models house two switches under a single plastic panel, known as a rocker switch, which could reasonably be counted as either one or two buttons.

Directly above the volume controls is a ring/silent switch that when engaged mutes telephone ringing, alert sounds from new & sent emails, text messages, and other push notifications, camera shutter sounds, Voice Memo sound effects, phone lock/unlock sounds, keyboard clicks, and spoken autocorrections. This switch does not mute alarm sounds from the Clock application, and in some countries or regions it will not mute the camera shutter or Voice Memo sound effects.[95] All buttons except Home were made of plastic on the original first generation iPhone and metal on all later models. The touchscreen furnishes the remainder of the user interface.

A software update in January 2008[96] allowed the first-generation iPhone to determine its location by trilateration of cell towers and Wi-Fi networks,[97] despite lacking GPS hardware. Since the iPhone 3G generation, the smartphones employ A-GPS, operated by the United States. Since the iPhone 4S generation, the devices also support GLONASS, the global positioning system operated by Russia.
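
Apps do not choose among these sources directly; they ask the Core Location framework for a position, and the system blends GPS, cell, and Wi-Fi data internally. A minimal sketch, using today’s Swift API rather than the Objective-C of the era described (the authorization call shown is the post-iOS 8 form and needs a usage string in Info.plist):

```swift
import CoreLocation

final class LocationReader: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    func start() {
        manager.delegate = self
        manager.desiredAccuracy = kCLLocationAccuracyBest
        manager.requestWhenInUseAuthorization() // prompts the user once
        manager.startUpdatingLocation()
    }

    // Called with fixes from whichever source (GPS, cell, Wi-Fi) the system used.
    func locationManager(_ manager: CLLocationManager,
                         didUpdateLocations locations: [CLLocation]) {
        guard let fix = locations.last else { return }
        print("lat \(fix.coordinate.latitude), lon \(fix.coordinate.longitude)")
    }
}
```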

Sensors

The display responds to three sensors (four since the iPhone 4). Moving the iPhone around triggers two other sensors (three since the iPhone 4), which are used to enable motion-controlled gaming applications and location-based services.

Proximity sensor

A proximity sensor deactivates the display and touchscreen when the device is brought near the face during a call. This is done to save battery power and to prevent inadvertent inputs from the user’s face and ears.
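
Third-party apps can opt in to the same sensor through UIKit. A minimal Swift sketch (modern API names; the original-era equivalent was Objective-C):

```swift
import UIKit

// Opt in: while enabled, the system blanks the screen automatically
// whenever the sensor reports that the device is near the user's face.
UIDevice.current.isProximityMonitoringEnabled = true

let token = NotificationCenter.default.addObserver(
    forName: UIDevice.proximityStateDidChangeNotification,
    object: nil, queue: .main) { _ in
    print("near face:", UIDevice.current.proximityState)
}
_ = token // retain the observer token for as long as updates are wanted
```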

Ambient light sensor

An ambient light sensor adjusts the display brightness which in turn saves battery power.

Accelerometer

A 3-axis accelerometer senses the orientation of the phone and changes the screen accordingly, allowing the user to easily switch between portrait and landscape mode.[98] Photo browsing, web browsing, and music playing support both upright and left or right widescreen orientations.[99] Unlike the iPad, the iPhone does not rotate the screen when turned upside-down, with the Home button above the screen, unless the running program has been specifically designed to do so. The 3.0 update added landscape support for still other applications, such as email, and introduced shaking the unit as a form of input.[100][101] The accelerometer can also be used to control third-party apps, notably games.
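
Apps read the same sensor through the Core Motion framework. A minimal Swift sketch of the kind of polling a motion-controlled game might do (the 60 Hz rate and the crude orientation test are illustrative choices, not anything the hardware mandates):

```swift
import CoreMotion

let motion = CMMotionManager()

if motion.isAccelerometerAvailable {
    motion.accelerometerUpdateInterval = 1.0 / 60.0 // 60 samples per second
    motion.startAccelerometerUpdates(to: .main) { data, _ in
        guard let a = data?.acceleration else { return }
        // Gravity dominates one axis; compare axes to guess orientation.
        print(abs(a.x) > abs(a.y) ? "landscape" : "portrait")
    }
}
```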

Magnetometer

A magnetometer has been built in since the iPhone 3GS generation; it measures the strength and direction of the magnetic field in the vicinity of the device. Sometimes certain devices or radio signals interfere with the magnetometer, requiring users to either move away from the interference or re-calibrate by moving the device in a figure-8 motion. Since the iPhone 3GS, the iPhone also features a Compass app, unique at the time of release, showing a compass that points in the direction of the magnetic field.
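
Compass-style headings reach apps through Core Location rather than as raw magnetometer values. A minimal Swift sketch (the negative-accuracy check is how the framework signals the interference described above):

```swift
import CoreLocation

final class CompassReader: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    func start() {
        guard CLLocationManager.headingAvailable() else { return } // 3GS and later
        manager.delegate = self
        manager.startUpdatingHeading()
    }

    func locationManager(_ manager: CLLocationManager,
                         didUpdateHeading newHeading: CLHeading) {
        // magneticHeading is degrees from magnetic north; a negative
        // headingAccuracy means the reading is invalid (e.g. interference).
        if newHeading.headingAccuracy >= 0 {
            print("heading: \(newHeading.magneticHeading)°")
        }
    }
}
```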

Gyroscopic sensor

Beginning with the iPhone 4 generation, Apple’s smartphones also include a gyroscopic sensor, enhancing its perception of how it is moved.

Audio and output

One of two speakers (left) and the microphone (right) surround the dock connector on the base of the 1st-generation iPhone. If a headset is plugged in, sound is played through it instead.

On the bottom of the iPhone, there is a speaker to the left of the dock connector and a microphone to the right. There is an additional loudspeaker above the screen that serves as an earpiece during phone calls. The iPhone 4 includes an additional microphone at the top of the unit for noise cancellation, and switches the placement of the microphone and speaker on the base of the unit—the speaker is on the right.[102] Volume controls are located on the left side of all iPhone models and as a slider in the iPod application.

The 3.5 mm TRRS connector for the headphones is located on the top left corner of the device for the first five generations (original through 4S), after which it was moved to the bottom left corner.[103] The headphone socket on the 1st-generation iPhone is recessed into the casing, making it incompatible with most headsets without the use of an adapter.[104] Subsequent generations eliminated the problem with a flush-mounted headphone socket. Cars equipped with an auxiliary jack allow handsfree use of the iPhone while driving, as a substitute for Bluetooth.

Apple’s own headset has a multipurpose button near the microphone that can play or pause music, skip tracks, and answer or end phone calls without touching the iPhone. A small number of third-party headsets specifically designed for the iPhone also include the microphone and control button.[105] The current headsets also provide volume controls, which are only compatible with more recent models.[106] A fourth ring in the audio jack carries this extra information.

The built-in Bluetooth 2.x+EDR supports wireless earpieces and headphones, which require the HSP profile. Stereo audio was added in the 3.0 update for hardware that supports A2DP.[100][101] While non-sanctioned third-party solutions exist, the iPhone does not officially support the OBEX file transfer protocol.[107] The lack of these profiles prevents iPhone users from exchanging multimedia files, such as pictures, music, and videos, with other Bluetooth-enabled cell phones.

Composite[108] or component[109] video at up to 576i and stereo audio can be output from the dock connector using an adapter sold by Apple. The iPhone 4 also supports 1024×768 VGA output[110] without audio, and HDMI output,[111] with stereo audio, via dock adapters. The iPhone did not support voice recording until the 3.0 software update.[100][101]

Battery

Replacing the battery requires disassembling the iPhone unit and exposing the internal hardware.

The iPhone features an internal rechargeable lithium-ion battery. Like an iPod, but unlike most other mobile phones, the battery is not user-replaceable.[104][112] The iPhone can be charged when connected to a computer for syncing across the included USB to dock connector cable, similar to charging an iPod. Alternatively, a USB to AC adapter (or “wall charger,” also included) can be connected to the cable to charge directly from an AC outlet.

Apple runs tests on preproduction units to determine battery life. Apple’s website says that the battery life “is designed to retain up to 80% of its original capacity after 400 full charge and discharge cycles”,[113] which is comparable to iPod batteries.

The battery life of early models of the iPhone has been criticized by several technology journalists as insufficient and less than Apple’s claims.[114][115][116][117] This is also reflected by a J. D. Power and Associates customer satisfaction survey, which gave the “battery aspects” of the iPhone 3G its lowest rating of 2 out of 5 stars.[118][119]

If the battery malfunctions or dies prematurely, the phone can be returned to Apple and replaced for free while still under warranty.[120] The warranty lasts one year from purchase and can be extended to two years with AppleCare. The battery replacement service and its pricing were not made known to buyers until the day the product was launched;[121][122] the service is similar to how Apple (and third parties) replace batteries for iPods. The Foundation for Taxpayer and Consumer Rights, a consumer advocate group, sent a complaint to Apple and AT&T over the fee that consumers have to pay to have the battery replaced.[121]

Since July 2007, third-party battery replacement kits have been available[123] at a much lower price than Apple’s own battery replacement program. These kits often include a small screwdriver and an instruction leaflet, but as with many newer iPod models, the battery in the first-generation iPhone is soldered in, so a soldering iron is required to install the new battery. The iPhone 3G uses a different battery fitted with a connector that is easier to replace.[124]

A patent filed by Apple, published in late July 2013, revealed the development of a new iPhone battery system that uses location data in combination with data on the user’s habits to moderate the handset’s power settings accordingly. Apple is working towards a power management system that will provide features such as the ability of the iPhone to estimate the length of time a user will be away from a power source, in order to modify energy usage, and a detection function that adjusts the charging rate to best suit the type of power source being used.[125]

Camera

The iPhone 4 is the first generation to have two cameras. The LED flash for the rear-facing camera (top) and the forward-facing camera (bottom) are available on the iPhone 4 and subsequent models.

The 1st-generation iPhone and iPhone 3G have a fixed-focus 2.0-megapixel camera on the back for digital photos. It has no optical zoom, flash, or autofocus, and does not natively support video recording. (The iPhone 3G can record video via a third-party app available on the App Store, and jailbreaking also allows users to do so.) iPhone OS 2.0 introduced geotagging for photos.

The iPhone 3GS has a 3.2-megapixel camera with autofocus, auto white balance, and auto macro (up to 10 cm). Manufactured by OmniVision, the camera can also capture 640×480 (VGA resolution) video at 30 frames per second,[126] although unlike higher-end CCD-based video cameras, it exhibits the rolling shutter effect.[127] The video can be cropped on the iPhone and directly uploaded to YouTube, MobileMe, or other services.

The iPhone 4 introduced a 5.0-megapixel camera (2592×1936 pixels) that can record video at 720p resolution, considered high-definition. It also has a backside-illuminated sensor that can capture pictures in low light and an LED flash that can stay lit while recording video.[128] It is the first iPhone that can natively do high-dynamic-range photography.[129] The iPhone 4 also has a second camera on the front that can take VGA photos and record SD video. Saved recordings may be synced to the host computer, attached to email, or (where supported) sent by MMS.

The iPhone 4S’ camera can shoot 8-MP stills and 1080p video, can be accessed directly from the lock screen, and can be triggered using the volume-up button as a shutter trigger. The built-in gyroscope can stabilize the image while recording video.
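
Third-party apps reach these cameras through high-level system interfaces rather than by driving the hardware directly; the simplest is UIKit’s image picker. A minimal Swift sketch (modern API names):

```swift
import UIKit

final class PhotoViewController: UIViewController,
        UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    func takePhoto() {
        // Not every model has every capability, so check before presenting.
        guard UIImagePickerController.isSourceTypeAvailable(.camera) else { return }
        let picker = UIImagePickerController()
        picker.sourceType = .camera
        picker.delegate = self
        present(picker, animated: true)
    }

    func imagePickerController(_ picker: UIImagePickerController,
            didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        let photo = info[.originalImage] as? UIImage
        picker.dismiss(animated: true)
        _ = photo // hand the captured image off to the rest of the app
    }
}
```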

The iPhone 5 and iPhone 4S, running iOS 6 or later, can take panoramas using the built-in camera app,[130] and the iPhone 5 also can take still photos while recording video.[131]

The camera on the iPhone 5 reportedly shows purple haze when the light source is just out of frame,[132] although Consumer Reports said it “is no more prone to purple hazing on photos shot into a bright light source than its predecessor or than several Android phones with fine cameras…”[133]

On all five model generations, the phone can be configured to bring up the camera app by quickly pressing the home key twice.[134] On all iPhones running iOS 5, it can also be accessed from the lock screen directly.

Beta code found in iOS 7 indicates that Apple may be outfitting the camera of the next iPhone with a slow-motion mode.[135]

Storage and SIM

An iPhone 3G with the SIM slot open. The SIM ejector tool is still placed in the eject hole.

The iPhone was initially released with two options for internal storage size: 4 GB or 8 GB. On September 5, 2007, Apple discontinued the 4 GB models.[136] On February 5, 2008, Apple added a 16 GB model.[137] The iPhone 3G was available in 16 GB and 8 GB. The iPhone 3GS came in 16 GB and 32 GB variants and remained available in 8 GB until September 2012, more than three years after its launch.

The iPhone 4 is available in 16 GB and 32 GB variants, as well as an 8 GB variant to be sold alongside the iPhone 4S at a reduced price point. The iPhone 4S is available in three sizes: 16 GB, 32 GB and 64 GB. All data is stored on the internal flash drive; the iPhone does not support expanded storage through a memory card slot, or the SIM card. The iPhone 5 is available in the same three sizes previously available to the iPhone 4S: 16 GB, 32 GB, and 64 GB.

GSM models of the iPhone use a SIM card to identify themselves to the GSM network. The SIM sits in a tray, which is inserted into a slot at the top of the device. The SIM tray can be ejected with a paper clip or the “SIM ejector tool” (a simple piece of die-cut sheet metal) included with the iPhone 3G and 3GS in the United States and with all models elsewhere in the world.[138][139] Some iPhone models shipped with a SIM ejector tool which was fabricated from an alloy dubbed “Liquidmetal“.[140] In most countries, the iPhone is usually sold with a SIM lock, which prevents the iPhone from being used on a different mobile network.[141]

The GSM iPhone 4 features a MicroSIM card that is located in a slot on the right side of the device.[142]

The CDMA model of the iPhone 4, like any other CDMA-only cell phone, does not use a SIM card or have a SIM card slot.

An iPhone 4S activated on a CDMA carrier, however, does have a SIM card slot but does not rely on a SIM card for activation on that CDMA network. A CDMA-activated iPhone 4S usually has a carrier-approved roaming SIM preloaded in its SIM slot at the time of purchase that is used for roaming on certain carrier-approved international GSM networks only. The SIM slot is locked to only use the roaming SIM card provided by the CDMA carrier.[143]
In the case of Verizon, for example, one can request that the SIM slot be unlocked for international use by calling the carrier’s support number and requesting an international unlock, provided the account has been in good standing for the past 60 days.[144] This method only unlocks the iPhone 4S for use on international carriers. An iPhone 4S that has been unlocked in this way will reject any non-international SIM cards (from AT&T Mobility or T-Mobile USA, for example).

The iPhone 5 uses the nano-SIM, in order to save more space for internal components.

Liquid contact indicators

All iPhones (and many other devices by Apple) have a small disc at the bottom of the headphone jack that changes from white to red on contact with water; the iPhone 3G and later models also have a similar indicator at the bottom of the dock connector.[145] Because Apple warranties do not cover water damage, employees examine the indicators before approving warranty repair or replacement.

The iPhone’s indicators are more exposed than those in some mobile phones from other manufacturers, which carry them in a more protected location, such as beneath the battery behind a battery cover. The iPhone’s indicators can be triggered during routine use, by an owner’s sweat,[146] steam in a bathroom, and other light environmental moisture.[147] Criticism led Apple to change its water damage policy for iPhones and similar products, allowing customers to request further internal inspection of the phone to verify whether internal liquid damage sensors had been triggered.[148]

Included items

The contents of the box of an iPhone 4. From left to right: iPhone 4 in plastic holder, written documentation, and (top to bottom) headset, USB cable, wall charger.

All iPhone models include written documentation, and a dock connector to USB cable. The first generation and 3G iPhones also came with a cleaning cloth. The first generation iPhone included a stereo headset (earbuds and a microphone) and a plastic dock to hold the unit upright while charging and syncing. The iPhone 3G includes a similar headset plus a SIM eject tool (the first generation model requires a paperclip). The iPhone 3GS includes the SIM eject tool and a revised headset, which adds volume buttons (not functional with previous iPhone versions).[106][149]

The iPhone 3G and 3GS are compatible with the same dock, sold separately, but not the first generation model’s dock.[150] All versions include a USB power adapter, or “wall charger,” which allows the iPhone to charge from an AC outlet. The iPhone 3G and iPhone 3GS sold in North America, Japan, Colombia, Ecuador, or Peru[151][152] include an ultracompact USB power adapter.

Software

Main articles: iOS and iOS version history

The iPhone Home screen of iOS 7 shows most of the applications provided by Apple. Users can download additional applications from the App Store, create Web Clips, rearrange the icons, and create and delete folders.

The iPhone, iPod Touch and iPad run an operating system known as iOS (formerly iPhone OS). It is a variant of the same Darwin operating system core that is found in Mac OS X. Also included is the “Core Animation” software component from Mac OS X v10.5 Leopard. Together with the PowerVR hardware (and on the iPhone 3GS, OpenGL ES 2.0), it is responsible for the interface’s motion graphics. The operating system takes up less than half a gigabyte.[153]

It is capable of supporting bundled and future applications from Apple, as well as from third-party developers. Software applications cannot be copied directly from Mac OS X but must be written and compiled specifically for iOS.

Like the iPod, the iPhone is managed from a computer using iTunes. The earliest versions of the OS required version 7.3 or later, which is compatible with Mac OS X version 10.3.9 Panther or later, and 32-bit Windows XP or Vista.[154] The release of iTunes 7.6 expanded this support to include 64-bit versions of XP and Vista,[155] and a workaround has been discovered for previous 64-bit Windows operating systems.[156]

Apple provides free updates to the OS for the iPhone through iTunes,[153] and major updates have historically accompanied new models.[157] Such updates often require a newer version of iTunes—for example, the 3.0 update requires iTunes 8.2—but the iTunes system requirements have stayed the same. Updates include bug fixes, security patches and new features.[158] For example, iPhone 3G users initially experienced dropped calls until an update was issued.[159][160]

Version 3.1 required iTunes 9.0, and iOS 4 required iTunes 9.2. iTunes 10.5, which is required to sync and activate iOS 5, requires Mac OS X 10.5.8 (Leopard) or later on G4 or G5 computers running at 800 MHz or faster; Mac OS X 10.3, 10.4, and 10.5–10.5.7 are no longer supported.

Interface

The interface is based around the home screen, a graphical list of available applications. iPhone applications normally run one at a time. Starting with the iPhone 4, a primitive version of multitasking came into play: users could double-click the home button to select among recently opened applications.[161] However, those apps did not run in the background. Starting with iOS 7, apps can truly multitask, and each open application can run in the background when not in focus.[162] Most functionality remains available while making a call or listening to music. The home screen can be accessed at any time by a hardware button below the screen, closing the open application in the process.[163]

By default, the Home screen contains the following icons: Messages (SMS and MMS messaging), Calendar, Photos, Camera, YouTube, Stocks, Maps (Google Maps), Weather, Voice Memos, Notes, Clock, Calculator, Settings, iTunes (store), App Store, and (on the iPhone 3GS and iPhone 4) Compass. FaceTime and Game Center were added in iOS 4.0 and 4.1, respectively. In iOS 5, Reminders and Newsstand were added, and the iPod application was split into separate Music and Videos applications. iOS 6 added Passbook, as well as an updated version of Maps that relies on data provided by TomTom and other sources, and added a Clock application to the iPad’s home screen; however, it no longer included YouTube. Docked at the base of the screen, four icons for Phone, Mail, Safari (Internet), and Music delineate the iPhone’s main purposes.[164] On January 15, 2008, Apple released software update 1.1.3, allowing users to create “Web Clips”, home screen icons that resemble apps and open a user-defined page in Safari. After the update, iPhone users can rearrange and place icons on up to nine other adjacent home screens, accessed by a horizontal swipe.[96]

Users can also add and delete icons from the dock, which is the same on every home screen. Each home screen holds up to twenty icons on the iPhone 2G, 3G, 4, and 4S, while each home screen on the iPhone 5 holds up to twenty-four icons due to its larger screen, and the dock holds up to four icons. Users can delete Web Clips and third-party applications at any time, and may select only certain applications for transfer from iTunes. Apple’s default programs, however, may not be removed. The 3.0 update added a system-wide search, known as Spotlight, to the left of the first home screen.[100][101]

Almost all input is given through the touch screen, which understands complex gestures using multi-touch. The iPhone’s interaction techniques enable the user to move the content up or down by a touch-drag motion of the finger. For example, zooming in and out of web pages and photos is done by placing two fingers on the screen and spreading them farther apart or bringing them closer together, a gesture known as “pinching“.
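
In third-party apps these gestures arrive as ready-made recognizers rather than raw touch points. A minimal Swift sketch of pinch-to-zoom on an image view (modern API names):

```swift
import UIKit

final class ZoomableImageViewController: UIViewController {
    let imageView = UIImageView()

    override func viewDidLoad() {
        super.viewDidLoad()
        imageView.isUserInteractionEnabled = true
        view.addSubview(imageView)
        let pinch = UIPinchGestureRecognizer(target: self,
                                             action: #selector(handlePinch(_:)))
        imageView.addGestureRecognizer(pinch)
    }

    @objc private func handlePinch(_ gesture: UIPinchGestureRecognizer) {
        guard let target = gesture.view else { return }
        // Apply the incremental scale, then reset it so each callback
        // delivers a delta instead of a running total.
        target.transform = target.transform.scaledBy(x: gesture.scale, y: gesture.scale)
        gesture.scale = 1.0
    }
}
```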

Scrolling through a long list or menu is achieved by sliding a finger over the display from bottom to top, or vice versa to go back. In either case, the list moves as if it is pasted on the outer surface of a wheel, slowly decelerating as if affected by friction. In this way, the interface simulates the physics of a real object.

Other user-centered interactive effects include horizontally sliding sub-selection, the vertically sliding keyboard and bookmarks menu, and widgets that turn around to allow settings to be configured on the other side. Menu bars are found at the top and bottom of the screen when necessary. Their options vary by program, but always follow a consistent style motif. In menu hierarchies, a “back” button in the top-left corner of the screen displays the name of the parent folder.

Phone

When making a call, the iPhone presents a number of options, including FaceTime on supported models. The screen is automatically disabled when held close to the face.

The iPhone allows audio conferencing, call holding, call merging, caller ID, and integration with other cellular network features and iPhone functions. For example, if music is playing when a call is received, the music fades out, and fades back in when the call has ended.

The proximity sensor shuts off the screen and touch-sensitive circuitry when the iPhone is brought close to the face, both to save battery and prevent unintentional touches. The iPhone does not support video calling or videoconferencing on versions prior to the fourth generation, as there is only one camera on the opposite side of the screen.[165]

The iPhone 4 supports video calling using either the front or back camera over Wi-Fi, a feature Apple calls FaceTime.[166] The first two models only support voice dialing through third-party applications.[167] Voice Control, available only on the iPhone 3GS and iPhone 4, allows users to say a contact’s name or number and the iPhone will dial it.[168]

The iPhone includes a visual voicemail (in some countries)[169] feature allowing users to view a list of current voicemail messages on-screen without having to call into their voicemail. Unlike most other systems, messages can be listened to and deleted in a non-chronological order by choosing any message from an on-screen list.

A music ringtone feature was introduced in the United States on September 5, 2007. Users can create custom ringtones from songs purchased from the iTunes Store for a small additional fee. The ringtones can be 3 to 30 seconds long from any part of a song, can fade in and out, pause from half a second to five seconds when looped, or loop continuously. All customizing can be done in iTunes,[170] or alternatively with Apple’s GarageBand software 4.1.1 or later (available only on Mac OS X)[171] or third-party tools.[172]

With iOS 6, released on September 19, 2012, Apple added options for handling an incoming call: the user can decline the call and reply with a message, or set a reminder to call back at a later time.[173]

On September 12, 2012, Apple unveiled the iPhone 5, the sixth iteration of the iPhone. New features included a bigger 4-inch screen, thinner design and 4G LTE.

Multimedia

The layout of the music library is similar to that of an iPod or current Symbian S60 phones. The iPhone can sort its media library by songs, artists, albums, videos, playlists, genres, composers, podcasts, audiobooks, and compilations. Options are always presented alphabetically, except in playlists, which retain their order from iTunes. The iPhone uses a large font that allows users plenty of room to touch their selection.

Users can rotate their device horizontally to landscape mode to access Cover Flow. Like on iTunes, this feature shows the different album covers in a scroll-through photo library. Scrolling is achieved by swiping a finger across the screen. Alternatively, headset controls can be used to pause, play, skip, and repeat tracks. On the iPhone 3GS, the volume can be changed with the included Apple Earphones, and the Voice Control feature can be used to identify a track, play songs in a playlist or by a specific artist, or create a Genius playlist.[168]

The iPhone supports gapless playback.[174] Like the fifth-generation iPods introduced in 2005, the iPhone can play digital video, allowing users to watch TV shows and movies in widescreen. Double-tapping switches between widescreen and fullscreen video playback.

The iPhone allows users to purchase and download songs from the iTunes Store directly to their iPhone. The feature originally required a Wi-Fi network, but since 2012 it can use the cellular data network if Wi-Fi is unavailable.[175]

The iPhone includes software that allows the user to upload, view, and email photos taken with the camera. The user zooms in and out of photos by sliding two fingers further apart or closer together, much like Safari. The Camera application also lets users view the camera roll, the pictures that have been taken with the iPhone’s camera. Those pictures are also available in the Photos application, along with any transferred from iPhoto or Aperture on a Mac, or Photoshop on a Windows PC.

Internet connectivity

Wikipedia Main Page on the iPhone Safari web browser in landscape mode

Internet access is available when the iPhone is connected to a local-area Wi-Fi network or a wide-area GSM or EDGE network, both second-generation (2G) wireless data standards. The iPhone 3G introduced support for third-generation UMTS and HSDPA 3.6;[176] only the iPhone 4S supports HSUPA networks (14.4 Mbit/s), and only the iPhone 3GS and iPhone 4 support HSDPA 7.2.[177]

AT&T introduced 3G in July 2004,[178] but as late as 2007, Steve Jobs stated that it was still not widespread enough in the US, and the chipsets not energy efficient enough, to be included in the iPhone.[91][179] Support for 802.1X, an authentication system commonly used by university and corporate Wi-Fi networks, was added in the 2.0 version update.[180]

By default, the iPhone will ask to join newly discovered Wi-Fi networks and prompt for the password when required. Alternatively, it can join closed Wi-Fi networks manually.[181] The iPhone will automatically choose the strongest network, connecting to Wi-Fi instead of EDGE when it is available.[182] Similarly, the iPhone 3G, 3GS, and 4 prefer 3G to 2G, and Wi-Fi to either.[183]

Wi-Fi, Bluetooth, and 3G (on the iPhone 3G onwards) can all be deactivated individually. Airplane mode disables all wireless connections at once, overriding other preferences. However, once in Airplane mode, one can explicitly enable Wi-Fi and/or Bluetooth modes to join and continue to operate over one or both of those networks while the cellular network transceivers remain off.

The iPhone 3GS has a maximum download rate of 7.2 Mbit/s.[184] Furthermore, email attachments as well as apps and media from Apple’s various stores must be smaller than 20 MB to be downloaded over a cellular network.[185] Larger files, often email attachments or podcasts, must be downloaded over Wi-Fi (which has no file size limits). If Wi-Fi is unavailable, one workaround is to open the files directly in Safari.[186]

Safari is the iPhone’s native web browser, and it displays pages similarly to its Mac and Windows counterparts. Web pages may be viewed in portrait or landscape mode, and the device supports automatic zooming by pinching together or spreading apart fingertips on the screen, or by double-tapping text or images.[187][188] Safari does not allow file downloads, except for predefined file types. The iPhone does not support Flash.[189]

Consequently, the UK’s Advertising Standards Authority adjudicated that an advertisement claiming the iPhone could access “all parts of the internet” should be withdrawn in its current form, on grounds of false advertising.[190] In a rare public letter in April 2010, Apple CEO Steve Jobs outlined the reasoning behind the absence of Flash on the iPhone (and iPad).[191] The iPhone supports SVG, CSS, HTML Canvas, and Bonjour.[192][193]

Google Chrome was introduced to iOS on June 26, 2012. In a report on July 18, 2012, Chitika announced that the Google Chrome web browser had taken 1.5% of the iOS web browser market since its release.[194]

The maps application can access Google Maps in map, satellite, or hybrid form. It can also generate directions between two locations, while providing optional real-time traffic information. During the iPhone’s announcement, Jobs demonstrated this feature by searching for nearby Starbucks locations and then placing a prank call to one with a single tap.[195][196] Support for walking directions, public transit, and street view was added in the version 2.2 software update, but no voice-guided navigation.[197]

The iPhone 3GS and iPhone 4 can orient the map with their digital compass.[198] Apple also developed a separate application to view YouTube videos on the iPhone, which streams videos after encoding them using the H.264 codec. Simple weather and stock quotes applications also tap into the Internet.

iPhone users can and do access the Internet frequently, and in a variety of places. According to Google, in 2008, the iPhone generated 50 times more search requests than any other mobile handset.[199] According to Deutsche Telekom CEO René Obermann, “The average Internet usage for an iPhone customer is more than 100 megabytes. This is 30 times the use for our average contract-based consumer customers.”[200] Nielsen found that 98% of iPhone users use data services, and 88% use the internet.[37] In China, the iPhone 3G and iPhone 3GS were built and distributed without Wi-Fi.[201]

With the introduction of the Verizon iPhone in January 2011, the issue of using the internet while on the phone was brought to the public’s attention. Of the two US carriers, AT&T’s network allows internet and phone to be used simultaneously, whereas Verizon’s network supports the use of only one at a time.[202]

Text input

The virtual keyboard on the iPhone (first gen) touchscreen

For text input, the iPhone implements a virtual keyboard on the touchscreen. It has automatic spell checking and correction, predictive word capabilities, and a dynamic dictionary that learns new words. The keyboard can predict what word the user is typing and complete it, and correct for the accidental pressing of keys near the presumed desired key.[203]

The keys are somewhat larger and spaced farther apart when in landscape mode, which is supported by only a limited number of applications. Touching a section of text for a brief time brings up a magnifying glass, allowing users to place the cursor in the middle of existing text. The virtual keyboard can accommodate 21 languages, including character recognition for Chinese.[204]

Alternate characters with accents (for example, letters from the alphabets of other languages) can be typed from the keyboard by pressing a letter for two seconds and selecting the alternate character from the popup.[205] The 3.0 update brought support for cutting, copying, and pasting text, as well as landscape keyboards in more applications.[100][101] On the iPhone 4S, Siri allows dictation.

Email and text messages

The iPhone also features an email program that supports HTML email, which enables the user to embed photos in an email message. PDF, Word, Excel, and PowerPoint attachments to mail messages can be viewed on the phone.[206] Apple’s MobileMe platform offers push email, which emulates the functionality of the popular BlackBerry email solution, for an annual subscription. Yahoo! offers a free push-email service for the iPhone. The IMAP (although not Push-IMAP) and POP3 mail standards are also supported, including Microsoft Exchange[207] and Kerio Connect.[208]

In the first versions of the iPhone firmware, this was accomplished by opening up IMAP on the Exchange server. Apple has also licensed Microsoft ActiveSync and has supported the platform (including push email) since the release of the iPhone 2.0 firmware.[209][210] The iPhone will sync email account settings over from Apple’s own Mail application, Microsoft Outlook, and Microsoft Entourage, or it can be manually configured on the device itself. With the correct settings, the email program can access almost any IMAP or POP3 account.[211]

Text messages are presented chronologically in a mailbox format similar to Mail, which places all text from recipients together with replies. Text messages are displayed in speech bubbles (similar to iChat) under each recipient’s name. The iPhone has built-in support for email message forwarding, drafts, and direct internal camera-to-email picture sending. Support for multi-recipient SMS was added in the 1.1.3 software update.[212] Support for MMS was added in the 3.0 update, but not for the original first generation iPhone[100][101] and not in the US until September 25, 2009.[213][214]

Third-party applications

See also: iOS SDK and App Store

At WWDC 2007 on June 11, 2007, Apple announced that the iPhone would support third-party web applications using Ajax that share the look and feel of the iPhone interface.[215] On October 17, 2007, Steve Jobs, in an open letter posted to Apple’s “Hot News” weblog, announced that a software development kit (SDK) would be made available to third-party developers in February 2008. The iPhone SDK was officially announced and released on March 6, 2008, at the Apple Town Hall facility.[216]

It is a free download, with an Apple registration, that allows developers to develop native applications for the iPhone and iPod Touch and then test them in an “iPhone simulator”. However, loading an application onto a real device is only possible after paying an Apple Developer Connection membership fee. Developers may set any price for their applications to be distributed through the App Store, of which they receive a 70% share.[217]

Developers can also opt to release the application for free and will not pay any costs to release or distribute the application beyond the membership fee. The App Store was launched with the release of iOS 2.0, on July 11, 2008.[210] The update was free for iPhone users; owners of older iPod Touches were required to pay US$10 for it.[218]

Once a developer has submitted an application to the App Store, Apple holds firm control over its distribution. Apple can halt the distribution of applications it deems inappropriate, for example, I Am Rich, a US$1000 program that simply demonstrated the wealth of its user.[219] Apple has been criticized for banning third-party applications that enable functionality that Apple does not want the iPhone to have: in 2008, Apple rejected Podcaster, which allowed iPhone users to download podcasts directly to the iPhone, claiming it duplicated the functionality of iTunes.[220] Apple has since released a software update that grants this capability.[197]

NetShare, another rejected app, would have enabled users to tether their iPhone to a laptop or desktop, using its cellular network to load data for the computer.[221] Many iPhone carriers worldwide later allowed tethering before Apple officially supported it with the upgrade to iOS 3.0, with AT&T Mobility being a relative latecomer in the United States.[222] In most cases, the carrier charges extra for tethering an iPhone.

Before the SDK was released, third parties were permitted to design “Web Apps” that would run through Safari.[223] Unsigned native applications are also available for “jailbroken” phones.[224] The ability to install native applications onto the iPhone outside of the App Store is not supported by Apple, the stated reason being that such native applications could be broken by any software update; however, Apple has stated it will not design software updates specifically to break native applications other than those that perform SIM unlocking.[225]

As of October 2013, the App Store had passed 60 billion app downloads.[226]

Accessibility

The iPhone can enlarge text to make it more accessible for vision-impaired users,[227] and can accommodate hearing-impaired users with closed captioning and external TTY devices.[228] The iPhone 3GS also features a white-on-black mode, VoiceOver (a screen reader), zooming for impaired vision, and mono audio for limited hearing in one ear.[229] Apple regularly publishes Voluntary Product Accessibility Templates which explicitly state compliance with the US regulation “Section 508”.[230]

Vulnerability

See also: Mobile security

In 2007, 2010, and 2011, developers released a series of tools called JailbreakMe that used security vulnerabilities in Mobile Safari rendering in order to jailbreak the device (which allows users to install any compatible software on the device instead of only App Store apps).[231][232][233] These exploits were each soon fixed by iOS updates from Apple. Theoretically these flaws could have also been used for malicious purposes.[234]

In July 2011, Apple released iOS 4.3.5 (4.2.10 for CDMA iPhone) to fix a security vulnerability with certificate validation.

Following the release of the iPhone 5s model, a group of German hackers called the Chaos Computer Club announced on September 21, 2013 that they had bypassed Apple’s new Touch ID fingerprint sensor by using “easy everyday means.” The group explained that the security system had been defeated by photographing a fingerprint from a glass surface and using that captured image as verification. The spokesman for the group stated: “We hope that this finally puts to rest the illusions people have about fingerprint biometrics. It is plain stupid to use something that you can’t change and that you leave everywhere every day as a security token.”[236][237]

Model comparison

Discontinued Current
Model iPhone (first generation) iPhone 3G iPhone 3GS iPhone 4 iPhone 4S iPhone 5 iPhone 5C iPhone 5S
Initial operating system iPhone OS 1.0 iPhone OS 2.0 iPhone OS 3.0 iOS 4.0 (GSM)
iOS 4.2.5 (CDMA)
iOS 5.0 iOS 6.0 iOS 7.0
Highest supported operating system iPhone OS 3.1.3 iOS 4.2.1 iOS 6.1.3 iOS 7.0.4
Display 3.5 in (89 mm), 3:2 aspect ratio, scratch-resistant[7] glossy glass covered screen, 262,144-color (18-bit) TN LCD, 480 × 320 px (HVGA) at 163 ppi, 200:1 contrast ratio In addition to prior, features a fingerprint-resistant oleophobic coating,[238] and 262,144-color (18-bit) TN LCD with hardware spatial dithering[9] 3.5 in (89 mm), 3:2 aspect ratio, aluminosilicate glass covered 16,777,216-color (24-bit) IPS LCD screen, 960 × 640 px at 326 ppi, 800:1 contrast ratio, 500 cd max brightness 4 in (100 mm), 71:40 aspect ratio, 1136 × 640 px screen resolution at 326 ppi
Storage 4, 8 or 16 GB 8 or 16 GB 8, 16 or 32 GB 8, 16, 32 or 64 GB 16, 32 or 64 GB 16 or 32 GB 16, 32 or 64 GB
Processor 620 MHz (underclocked to 412 MHz) Samsung 32-bit RISC ARM (32 KB L1) 1176JZ(F)-S v1.0[239][240] 833 MHz (underclocked to 600 MHz) ARM Cortex-A8[11][241]
Samsung S5PC100[11][242] (64 KB L1 + 256 KB L2)
1 GHz (underclocked to 800 MHz) ARM Cortex-A8 Apple A4 (SoC)[243] 1 GHz (underclocked to 800 MHz) dual-core ARM Cortex-A9 Apple A5 (SoC)[244] 1.3 GHz dual-core Apple-designed ARMv7s Apple A6[245] 1.3 GHz dual-core Apple-designed ARMv8-A 64-bit Apple A7 with M7 motion coprocessor[246]
Bus frequency and width 103 MHz (32-bit) 100 MHz (32-bit) 100 MHz (64-bit) 250 MHz (64-bit)
Graphics PowerVR MBX Lite 3D GPU[10] (103 MHz) PowerVR SGX535 GPU
(150 MHz in 3GS and 200 MHz in iPhone 4)[11][12]
PowerVR SGX543MP2 (dual-core, 200 MHz) GPU[13] PowerVR SGX543MP3 (tri-core, 266 MHz) GPU PowerVR G6430 (four-cluster) GPU[247]
Memory 128 MB LPDDR DRAM[248] (137 MHz) 256 MB LPDDR DRAM[11][241] (200 MHz) 512 MB LPDDR2 DRAM[249][250][251][252][253] (200 MHz) 1 GB LPDDR2 DRAM[254][255] 1 GB LPDDR3 DRAM[256]
Connector USB 2.0 dock connector Lightning connector
Connectivity Wi-Fi (802.11 b/g) Wi-Fi (802.11 b/g/n) Wi-Fi (802.11 a/b/g/n)
GPS No Yes
Digital compass No Yes
Bluetooth Bluetooth 2.0 + EDR (Cambridge Bluecore4)[257] Bluetooth 2.1 + EDR (Broadcom 4325),[258] Bluetooth 4.0
Cellular Quad-band GSM/GPRS/EDGE (850, 900, 1,800, 1,900 MHz) In addition to prior:
Tri-band 3.6 Mbit/s UMTS/HSDPA (850, 1,900, 2,100 MHz),[259]
In addition to prior:
7.2 Mbit/s HSDPA
In addition to prior:
Penta-band UMTS/HSDPA (800, 850, 900, 1,900, 2,100 MHz),[102][260]
5.76 Mbit/s HSUPA
In addition to prior:
14.4 Mbit/s HSDPA (4G on AT&T),
Dynamically switching dual antenna,[261]
Combined GSM/CDMA World phone ability
In addition to prior: LTE, HSPA+ and DC-HSDPA
CDMA model:
Dual-band CDMA/EV-DO Rev. A (800, 1,900 MHz)
SIM card form-factor Mini-SIM Micro-SIM Nano-SIM
Additional Features Wi-Fi (802.11b/g)
USB power adapter
earphones with remote and mic
In addition to prior:
Assisted GPS
In addition to prior:
Voice control
Digital compass
Nike+
Volume controls on earphones
In addition to prior:
Wi-Fi (802.11b/g/n) [802.11n on 2.4 GHz]
3-axis gyroscope
Dual-mic noise suppression
In addition to prior:
GLONASS support
Siri voice assistant
In addition to prior:
Wi-Fi (802.11a/b/g/n) [802.11n on 2.4 GHz and 5 GHz][262]
Triple microphone noise suppression
Revised iPod earpods
None in addition to prior In addition to iPhone 5:
Touch ID (fingerprint scanner in home button)
Cameras Back 2 MP, f/2.8 3 MP photos, VGA (480p) video at 30 fps, macro focus 5 MP photos, f/2.8, 720p HD video (30 fps), back-illuminated sensor, LED flash 8 MP photos, f/2.4, 1080p HD video (30 fps), back-illuminated sensor, face detection, video stabilization, panorama 8 MP photos with 1.4 µm pixels, f/2.4, 1080p HD video (30 fps), infrared cut-off filter, back-illuminated sensor, face detection, video stabilization, panorama and ability to take photos while shooting videos 8 MP photos with 1.5 µm pixels, f/2.2 aperture, 1080p HD video (30 fps) or 720p HD slo-mo video at 120 fps, improved video stabilization, True Tone flash, infrared cut-off filter, back-illuminated sensor, face detection, panorama, ability to take photos while shooting videos and burst mode
Front No VGA (0.3 MP) photos and videos (30 fps) 1.2 MP photos with 1.75 µm pixels, 720p HD video (30 fps), back-illuminated sensor 1.2 MP photos with 1.9 µm pixels, 720p HD video (30 fps), back-illuminated sensor
Audio codec Wolfson Microelectronics WM8758BG[263] Wolfson Microelectronics WM6180C[264] Cirrus Logic CS42L61 (CLI1495B0; 338S0589)[265][266] Cirrus Logic CLI1560B0 (338S0987)[267][268] Cirrus Logic CLI1583B0/CS35L19 (338S1077)[269]
Materials Aluminum, glass, steel, and black plastic Glass, plastic, and steel; black or white
(white not available for 8 GB models)
Black or white aluminosilicate glass and stainless steel Black with anodized aluminium “Slate” metal or white with “Silver” aluminium metal White, pink, yellow, blue or green polycarbonate Silver (white front with “Silver” aluminium metal back), Space Gray (black front with anodized aluminium “Space Gray” metal back) or Gold (white front with anodized aluminium “Gold” metal back)
Power Built-in non-removable rechargeable lithium-ion polymer battery[256][270][271][272]
3.7 V 5.18 W·h (1,400 mA·h)[9] 3.7 V 4.12 W·h (1,150 mA·h)[271][273] 3.7 V 4.51 W·h (1,219 mA·h)[274] 3.7 V 5.25 W·h (1,420 mA·h)[275] 3.7 V 5.3 W·h (1,432 mA·h)[276] 3.8 V 5.45 W·h (1,440 mA·h)[256] 3.8 V 5.73 W·h (1,507 mA·h)[256] 3.8 V 5.96 W·h (1,570 mA·h)[256]
Rated battery life (hours) audio: 24
video: 7
Talk over 2G: 8
Browsing internet: 6
Standby: 250
audio: 24
video: 7
Talk over 3G: 5
Browsing over 3G: 5
Browsing over Wi-Fi: 9
Standby: 300
audio: 30
video: 10
Talk over 3G: 5
Browsing over 3G: 5
Browsing over Wi-Fi: 9
Standby: 300
audio: 40
video: 10
Talk over 3G: 7
Browsing over 3G: 6
Browsing over Wi-Fi: 10
Standby: 300[277]
audio: 40
video: 10
Talk over 3G: 8
Browsing over 3G: 6
Browsing over Wi-Fi: 9
Standby: 200
audio: 40
video: 10
Talk over 3G: 8
Browsing over 3G: 8
Browsing over LTE: 8
Browsing over Wi-Fi: 10
Standby: 225
audio: 40
video: 10
Talk over 3G: 10
Browsing over 3G: 8
Browsing over LTE: 10
Browsing over Wi-Fi: 10
Standby: 250
Dimensions 115 mm (4.5 in) H
61 mm (2.4 in) W
11.6 mm (0.46 in) D
115.5 mm (4.55 in) H
62.1 mm (2.44 in) W
12.3 mm (0.48 in) D
115.2 mm (4.54 in) H
58.6 mm (2.31 in) W
9.3 mm (0.37 in) D
123.8 mm (4.87 in) H
58.6 mm (2.31 in) W
7.6 mm (0.30 in) D
124.4 mm (4.90 in) H
59.2 mm (2.33 in) W
8.97 mm (0.353 in) D
123.8 mm (4.87 in) H
58.6 mm (2.31 in) W
7.6 mm (0.30 in) D
Weight 135 g (4.8 oz) 133 g (4.7 oz) 135 g (4.8 oz) 137 g (4.8 oz) 140 g (4.9 oz) 112 g (4.0 oz) 132 g (4.7 oz) 112 g (4.0 oz)
Model Number[278] A1203 A1324 (China)
A1241
A1325 (China)
A1303
A1349 (CDMA model)
A1332 (GSM model)
A1431 (GSM China)
A1387
A1428 (GSM model)
A1429 (GSM and CDMA model)
A1442 (CDMA model, China)
A1532 (North America)
A1456 (US & Japan)
A1507 (Europe)
A1529 (Asia & Oceania)
A1533 (North America)
A1453 (US & Japan)
A1457 (Europe)
A1530 (Asia & Oceania)
Released 4, 8 GB: June 29, 2007
16 GB: February 5, 2008
All models: July 11, 2008
16, 32 GB: June 19, 2009
8 GB black: June 24, 2010
16, 32 GB: June 24, 2010
CDMA: February 10, 2011
White: April 28, 2011
8 GB: October 14, 2011
16, 32, 64 GB: October 14, 2011
8 GB: September 20, 2013
All models: September 21, 2012
All models: September 20, 2013
All models: September 20, 2013
Discontinued 4 GB: September 5, 2007
8, 16 GB: July 11, 2008
16 GB: June 8, 2009
8 GB black: June 7, 2010
16, 32 GB: June 24, 2010
8 GB black: September 12, 2012
16, 32 GB: October 4, 2011
8 GB: September 10, 2013
32, 64 GB: September 12, 2012
16 GB: September 10, 2013
8 GB: In Production
All models: September 10, 2013
In Production
In Production

Intellectual property

Apple has filed more than 200 patent applications related to the technology behind the iPhone.[279][280]

LG Electronics claimed the design of the iPhone was copied from the LG Prada. Woo-Young Kwak, head of LG Mobile Handset R&D Center, said at a press conference: “we consider that Apple copied Prada phone after the design was unveiled when it was presented in the iF Design Award and won the prize in September 2006.”[281]

On September 3, 1993, Infogear filed for the US trademark “I PHONE”[282] and on March 20, 1996, applied for the trademark “IPhone”.[283] “I Phone” was registered in March 1998,[282] and “IPhone” was registered in 1999.[283] The I PHONE mark has since been abandoned.[282] Infogear’s trademarks cover “communications terminals comprising computer hardware and software providing integrated telephone, data communications and personal computer functions” (1993 filing),[282] and “computer hardware and software for providing integrated telephone communication with computerized global information networks” (1996 filing).[284]

Infogear released a telephone with an integrated web browser under the name iPhone in 1998.[285] In 2000, Infogear won an infringement claim against the owners of the iphones.com domain name.[286] In June 2000, Cisco Systems acquired Infogear, including the iPhone trademark.[287] On December 18, 2006, Cisco released a range of re-branded Voice over IP (VoIP) sets under the name iPhone.[288]

In October 2002, Apple applied for the “iPhone” trademark in the United Kingdom, Australia, Singapore, and the European Union. A Canadian application followed in October 2004, and a New Zealand application in September 2006. As of October 2006, only the Singapore and Australian applications had been granted. In September 2006, a company called Ocean Telecom Services applied for an “iPhone” trademark in the United States, United Kingdom and Hong Kong, following a filing in Trinidad and Tobago.[289]

As the Ocean Telecom trademark applications use exactly the same wording as Apple’s New Zealand application, it is assumed that Ocean Telecom applied on behalf of Apple.[290] The Canadian application was opposed in August 2005 by a Canadian company called Comwave, which itself applied for the trademark three months later. Comwave has been selling VoIP devices called iPhone since 2004.[287]

Shortly after Steve Jobs’ January 9, 2007, announcement that Apple would be selling a product called iPhone in June 2007, Cisco issued a statement that it had been negotiating trademark licensing with Apple and expected Apple to agree to the final documents that had been submitted the night before.[291] On January 10, 2007, Cisco announced it had filed a lawsuit against Apple over the infringement of the trademark iPhone, seeking an injunction in federal court to prohibit Apple from using the name.[292] Cisco later claimed that the trademark lawsuit was a “minor skirmish” that was not about money, but about interoperability.[293]

On February 2, 2007, Apple and Cisco announced that they had agreed to temporarily suspend litigation while they held settlement talks,[294] and subsequently announced on February 20, 2007, that they had reached an agreement. Both companies are allowed to use the “iPhone” name[295] in exchange for “exploring interoperability” between their security, consumer, and business communications products.[296]

The iPhone has also inspired several leading high-tech clones,[297] driving both the popularity of Apple and consumer willingness to upgrade iPhones quickly.[298]

On October 22, 2009, Nokia filed a lawsuit against Apple for infringement of its GSM, UMTS, and WLAN patents. Nokia alleged that Apple had been violating ten of Nokia’s patents since the iPhone’s initial release.[299]

In December 2010, Reuters reported that some iPhone and iPad users were suing Apple Inc. because some applications were passing user information to third-party advertisers without permission. Makers of applications such as Textplus4, Paper Toss, The Weather Channel, Dictionary.com, Talking Tom Cat, and Pumpkin Maker have also been named as co-defendants in the lawsuit.[300]

In August 2012, Apple won a smartphone patent lawsuit in the USA against Samsung, the world’s largest maker of smartphones.[301]

In March 2013, an Apple patent for a wraparound display was revealed.[302]

Secret tracking

Since April 20, 2011, a hidden, unencrypted file on the iPhone and other iOS devices has been widely discussed in the media.[303][304] It was alleged that the file, labeled “consolidated.db”, constantly stores the iPhone user’s movements by approximating geographic locations calculated by triangulating nearby cell phone towers, a technology proven to be inaccurate at times.[305] The file was introduced with the June 2010 update of Apple iOS 4 and may contain almost a year’s worth of data. Previous versions of iOS stored similar information in a file called “h-cells.plist”.[306]

F-Secure discovered that the data is transmitted to Apple twice a day and postulated that Apple is using the information to construct a global location database similar to those constructed by Google and Skyhook through wardriving.[307] Unlike the Google “Latitude” application, which performs a similar task on Android phones, the file does not depend on the signing of a specific EULA or even the user’s knowledge; however, the iPhone’s 15,200-word terms and conditions state that “Apple and [their] partners and licensees may collect, use, and share precise location data, including the real-time geographic location of [the user’s] Apple computer or device”.[308]

The file is also automatically copied onto the user’s computer once synchronized with the iPhone. An open source application named “iPhoneTracker”, which turns the data stored in the file into a visual map, was made available to the public in April 2011.[309] While the file cannot be erased without jailbreaking the phone, it can be encrypted.[310]

Apple gave an official response on its website on April 27, 2011,[311] after questions were submitted by users, the Associated Press, and others.[312] Apple clarified that the data is a small portion of its crowd-sourced location database cache of Wi-Fi hotspots and cell towers, downloaded from Apple onto the iPhone to make location services faster than with GPS alone; the data therefore does not represent the locations of the iPhone itself. The volume of data retained was an error. Apple issued an update for iOS (version 4.3.3, or 4.2.8 for the CDMA iPhone 4) which reduced the size of the cache, stopped it being backed up to iTunes, and erased it entirely whenever location services were turned off.[311] The upload to Apple can also be selectively disabled under “System services” via “Cell Network Search”.

Intelligence agency access

It was revealed as a part of the 2013 mass surveillance disclosures that the American and British intelligence agencies, the National Security Agency (NSA) and the Government Communications Headquarters (GCHQ) respectively, have access to the user data on iPhones, BlackBerrys, and Android phones. They are able to read almost all smartphone information, including SMS, location, emails, and notes.[235]

Restrictions

Jailbroken iPod Touch on iOS 3.0. The serial number and Wi-Fi address have been removed from the image.

Apple tightly controls certain aspects of the iPhone. According to Jonathan Zittrain, the emergence of closed devices like the iPhone has made computing more proprietary than early versions of Microsoft Windows.[313]

The hacker community has found many workarounds, most of which are disallowed by Apple and make it difficult or impossible to obtain warranty service.[314] “Jailbreaking” allows users to install apps not available on the App Store or to modify basic functionality, and SIM unlocking allows the iPhone to be used on a different carrier’s network.[315] However, in the United States, under the Magnuson-Moss Warranty Act of 1975, enforced by the Federal Trade Commission, Apple cannot void an iPhone’s warranty unless it can show that a problem or component failure is linked to the installation or placement of an after-market item such as unauthorized applications.[316]

The iPhone also has a settings area where parents can set restrictions or parental controls[317] on apps that can be downloaded or used within the iPhone. Access to the restrictions area requires a password.[318]

Activation

The iPhone normally prevents access to its media player and web features unless it has also been activated as a phone with an authorized carrier. On July 3, 2007, Jon Lech Johansen reported on his blog that he had successfully bypassed this requirement and unlocked the iPhone’s other features with a combination of custom software and modification of the iTunes binary. He published the software and offsets for others to use.[319]

Unlike the first generation iPhone, the iPhone 3G must be activated in the store in most countries.[320] This makes the iPhone 3G more difficult, but not impossible, to hack. The need for in-store activation, as well as the huge number of first-generation iPhone and iPod Touch users upgrading to iPhone OS 2.0, caused a worldwide overload of Apple’s servers on July 11, 2008, the day on which both the iPhone 3G and iPhone OS 2.0 updates as well as MobileMe were released. After the update, devices were required to connect to Apple’s servers to authenticate the update, causing many devices to be temporarily unusable.[321]

Users on the O2 network in the United Kingdom, however, can buy the phone online and activate it via iTunes as with the previous model.[322] Even where not required, vendors usually offer activation for the buyer’s convenience. In the US, Apple has begun to offer free shipping on both the iPhone 3G and the iPhone 3GS (when available), reversing the in-store activation requirement. Best Buy and Walmart also sell the iPhone.[323]

Unapproved third-party software and jailbreaking

The iPhone’s operating system is designed to run only software that has an Apple-approved cryptographic signature. This restriction can be overcome by “jailbreaking” the phone,[324] which involves replacing the iPhone’s firmware with a slightly modified version that does not enforce the signature check. Doing so may be a circumvention of Apple’s technical protection measures.[325] Apple, in a statement to the United States Copyright Office in response to Electronic Frontier Foundation (EFF) lobbying for a DMCA exception for this kind of hacking, claimed that jailbreaking the iPhone would be copyright infringement due to the necessary modification of system software.[326] However, in 2010, jailbreaking was declared officially legal in the United States under a DMCA exemption.[327] Jailbroken iPhones may be susceptible to computer viruses, but few such incidents have been reported.[328][329]

iOS and Android 2.3.3 ‘Gingerbread’ may be set up to dual boot on a jailbroken iPhone with the help of OpeniBoot or iDroid.[330][331]

SIM unlocking

United States

iPhone 3G shown with the SIM tray partially ejected.

Most iPhones were and still are sold with a SIM lock, which restricts the use of the phone to one particular carrier, a common practice with subsidized GSM phones. Unlike most GSM phones, however, the phone cannot be officially unlocked by entering a code.[332] The locked or unlocked state is maintained on Apple’s servers per IMEI and is set when the iPhone is activated.

While the iPhone was initially sold in the US only on the AT&T network with a SIM lock in place, various hackers have found methods to “unlock” the phone from a specific network.[333] Although AT&T, Sprint and Verizon are the only authorized iPhone carriers in the United States, unlocked iPhones can be used with other carriers after unlocking.[334] For example, an unlocked iPhone may be used on the T-Mobile network in the US but, while an unlocked iPhone is compatible with T-Mobile’s voice network, it may not be able to make use of 3G functionality (i.e., no mobile web or e-mail, etc.).[335] More than a quarter of the original first-generation iPhones sold in the US were not registered with AT&T. Apple speculates that they were likely shipped overseas and unlocked, a lucrative market before the iPhone 3G’s worldwide release.[36][336][337]

On March 26, 2009, AT&T in the United States began selling the iPhone without a contract, though still SIM-locked to its network.[338] The up-front purchase price of such iPhone units is often twice that of units bundled with contracts.[339] Outside of the United States, policies differ, especially in US territories and insular areas like Guam, where GTA Teleguam is the exclusive carrier for the iPhone, since none of the three US carriers (AT&T, Sprint, and Verizon) has a presence in the area.[340]

On April 8, 2012, AT&T began offering a factory SIM unlock option (which Apple calls “whitelisting”) for iPhone owners, allowing the phone to be used on any carrier it supports.[341]

It has been reported that the Verizon iPhone 5 comes factory unlocked. After this discovery, Verizon announced that the Verizon iPhone 5 would remain unlocked, due to the regulations that the FCC had placed on the 700 MHz C-Block spectrum, which Verizon utilizes.[342]

United Kingdom

In the United Kingdom, the networks O2, EE, 3, and Vodafone, as well as the MVNO Tesco Mobile, sell the device under subsidised contracts or for use on pay-as-you-go. The phones are locked to the network initially, though they can usually be unlocked either after a certain period of contract length has passed or for a small fee. All current versions of the iPhone are also available for purchase SIM-free from the Apple Store or Apple’s Online Store, and are consequently unlocked for use on any GSM network.[343]

Australia and other countries

Five major carriers in Australia (Three, Optus, Telstra, Virgin Mobile, and Vodafone)[344] offer legitimate unlocking, now at no cost for all iPhone devices, both current and prior models. The iPhone 3GS and the iPhone 4 can also be bought unlocked from Apple Retail Stores or the Apple Online Store.[141]

Internationally, policies vary, but many carriers sell the iPhone unlocked for full retail price.[141]

Legal battle over brand name

Mexico

In 2003, four years before the iPhone was officially introduced, the trademark iFone was registered in Mexico by a communications systems and services company, iFone.[345] Apple tried to gain control over the brand name, but a Mexican court refused the request. The case began in 2009, when the Mexican firm sued Apple. The Supreme Court of Mexico upheld iFone as the rightful owner of the mark and held that Apple’s use of iPhone constituted a trademark violation.[346]

Brazil

In Brazil, the brand IPHONE was registered in 2000 by the company then called Gradiente Eletrônica S.A., now IGB Eletrônica S.A. According to the filing, Gradiente foresaw the revolution in the convergence of voice and data over the Internet at the time.[347]

In Brazil, the final battle over the brand name concluded in 2008. On December 18, 2012, IGB launched its own line of Android smartphones under the trade name to which it has exclusive rights in the local market.[347] In February 2013, the Brazilian Patent and Trademark Office (known as the “Instituto Nacional da Propriedade Industrial”) ruled that Gradiente Eletrônica, not Apple, owned the “iPhone” mark in Brazil. The “iPhone” term had been registered by Gradiente in 2000, seven years before Apple’s release of its iPhone. This decision came three months after Gradiente Eletrônica launched a lower-cost smartphone using the iPhone brand.[348]

C++

Posted: January 16, 2014 in Computer language
Tags:

C++ (pronounced see plus plus) is a general-purpose programming language that is statically typed, free-form, multi-paradigm and compiled. It is regarded as an intermediate-level language, as it comprises both high-level and low-level language features.[3] Developed by Bjarne Stroustrup starting in 1979 at Bell Labs, C++ was originally named C with Classes, adding object-oriented features, such as classes, and other enhancements to the C programming language. The language was renamed C++ in 1983,[4] as a pun involving the increment operator.

C++ is one of the most popular programming languages[5][6] and is implemented on a wide variety of hardware and operating system platforms. As a language that compiles efficiently to native code, its application domains include systems software, application software, device drivers, embedded software, high-performance server and client applications, and entertainment software such as video games.[7] Several groups provide both free and proprietary C++ compiler software, including the GNU Project, LLVM, Microsoft and Intel. C++ has greatly influenced many other popular programming languages, most notably C#[2] and Java.

The language began as enhancements to C, first adding classes, then virtual functions, operator overloading, multiple inheritance, templates and exception handling, among other features. After years of development, the C++ programming language standard was ratified in 1998 as ISO/IEC 14882:1998. The standard was amended by the 2003 technical corrigendum, ISO/IEC 14882:2003. The current standard extending C++ with new features was ratified and published by ISO in September 2011 as ISO/IEC 14882:2011 (informally known as C++11).[8]

History

Bjarne Stroustrup, creator of C++

Bjarne Stroustrup, a Danish and British trained computer scientist, began his work on “C with Classes” in 1979.[4] The idea of creating a new language originated from Stroustrup’s experience in programming for his Ph.D. thesis. Stroustrup found that Simula had features that were very helpful for large software development, but the language was too slow for practical use, while BCPL was fast but too low-level to be suitable for large software development. When Stroustrup started working at AT&T Bell Labs, he had the problem of analyzing the UNIX kernel with respect to distributed computing. Remembering his Ph.D. experience, Stroustrup set out to enhance the C language with Simula-like features.[9] C was chosen because it was general-purpose, fast, portable and widely used. Besides C and Simula, some other languages that inspired him were ALGOL 68, Ada, CLU and ML. At first, the class, derived class, strong typing, inlining, and default argument features were added to C via Stroustrup’s “C with Classes”-to-C compiler, Cpre.[10]

In 1983, the name of the language was changed from C with Classes to C++ (++ being the increment operator in C). New features were added including virtual functions, function name and operator overloading, references, constants, user-controlled free-store memory control, improved type checking, and BCPL style single-line comments with two forward slashes (//), as well as the development of a proper compiler for C++, Cfront. In 1985, the first edition of The C++ Programming Language was released, providing an important reference to the language, as there was not yet an official standard.[11] The first commercial implementation of C++ was released in October of the same year.[12] Release 2.0 of C++ came in 1989 and the updated second edition of The C++ Programming Language was released in 1991.[13] New features included multiple inheritance, abstract classes, static member functions, const member functions, and protected members. In 1990, The Annotated C++ Reference Manual was published. This work became the basis for the future standard. Late feature additions included templates, exceptions, namespaces, new casts, and a Boolean type.

As the C++ language evolved, the standard library evolved with it. The first addition to the C++ standard library was the stream I/O library, which provided facilities to replace traditional C functions such as printf and scanf. Later, among the most significant additions to the standard library was a large part of the Standard Template Library.

It is possible to write object-oriented or procedural code in the same program in C++. This has caused some concern that some C++ programmers are still writing procedural code but are under the impression that it is object oriented, simply because they are using C++. Often the result is an amalgamation of the two, which usually causes the most problems when the code is revisited or the task is taken over by another coder.[14]

C++ continues to be used and is one of the preferred programming languages to develop professional applications.[15]

Etymology

According to Stroustrup: “the name signifies the evolutionary nature of the changes from C”.[16] During C++’s development period, the language had been referred to as “new C”, then “C with Classes”. The final name is credited to Rick Mascitti (mid-1983)[10] and was first used in December 1983. When Mascitti was questioned informally in 1992 about the naming, he indicated that it was given in a tongue-in-cheek spirit. It stems from C’s “++” operator (which increments the value of a variable) and a common naming convention of using “+” to indicate an enhanced computer program. A joke goes that the name itself has a bug: due to the use of post-increment, which increments the value of the variable but evaluates to the unincremented value, C++ is no better than C, and the pre-increment ++C form should have been used instead.[17] There is no language called “C plus”. ABCL/c+ was the name of an earlier, unrelated programming language. A few other languages have been named similarly to C++, most notably C– and C#.
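The joke rests on ordinary increment semantics, which a few lines make concrete (a minimal sketch; the variable names are illustrative):

#include <iostream>

int main()
{
    int c = 7;
    int post = c++; // post-increment: evaluates to the old value (7), then increments c to 8
    int pre = ++c;  // pre-increment: increments c to 9 first, then evaluates to 9
    std::cout << post << " " << pre << "\n"; // prints 7 9
}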

Philosophy

Throughout C++’s life, its development and evolution has been informally governed by a set of rules that its evolution should follow:[9]

  • It must be driven by actual problems and its features should be useful immediately in real world programmes.
  • Every feature should be implementable (with a reasonably obvious way to do so).
  • Programmers should be free to pick their own programming style, and that style should be fully supported by C++.
  • Allowing a useful feature is more important than preventing every possible misuse of C++.
  • It should provide facilities for organising programmes into well defined separate parts, and provide facilities for combining separately developed parts.
  • No implicit violations of the type system (but allow explicit violations that have been explicitly asked for by the programmer).
  • Make user-created types have equal support and performance to built-in types.
  • Any features that you do not use you do not pay for (e.g. in performance).
  • There should be no language beneath C++ (except assembly language).
  • C++ should work alongside other pre-existing programming languages, rather than fostering its own separate and incompatible programming environment.
  • If what the programmer wants to do is unknown, allow the programmer to specify (provide manual control).

Standardization

Year C++ Standard Informal name
1998 ISO/IEC 14882:1998[18] C++98
2003 ISO/IEC 14882:2003[19] C++03
2007 ISO/IEC TR 19768:2007[20] C++TR1
2011 ISO/IEC 14882:2011[21] C++11

In 1998, the C++ standards committee (the ISO/IEC JTC1/SC22/WG21 working group) standardized C++ and published the international standard ISO/IEC 14882:1998 (informally known as C++98). For some years after the official release of the standard, the committee processed defect reports, and in 2003 published a corrected version of the C++ standard, ISO/IEC 14882:2003. In 2005, a technical report, called the “Library Technical Report 1” (often known as TR1 for short), was released. While not an official part of the standard, it specified a number of extensions to the standard library, which were expected to be included in the next version of C++.

The latest major revision of the C++ standard, C++11, (formerly known as C++0x) was approved by ISO/IEC on 12 August 2011.[22] It has been published as 14882:2011.[23] There are plans for a minor (C++14) and a major revision (C++17) in the future.[24]

C++14 is the name being used for the next revision. It is planned to be a small extension over C++11, featuring mainly bug fixes and small improvements, similarly to how C++03 was a small extension to C++98. While the name ‘C++14’ implies a release in 2014, this date is not fixed.

Language

C++ inherits most of C’s syntax. The following is Bjarne Stroustrup’s version of the Hello world program that uses the C++ Standard Library stream facility to write a message to standard output:[25][26]

#include <iostream>
 
int main()
{
   std::cout << "Hello, world!\n";
}

Within functions that define a non-void return type, failure to return a value before control reaches the end of the function results in undefined behaviour (compilers typically provide the means to issue a diagnostic in such a case).[27] The sole exception to this rule is the main function, which implicitly returns a value of zero.[28]
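A minimal sketch of the rule and its exception (the function name is illustrative):

int square(int x)
{
    return x * x; // a non-void function must return a value before control reaches its end
}

int main()
{
    int n = square(3);
    (void)n;
    // no return statement needed: main implicitly returns 0
}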

Operators and operator overloading

Operators that cannot be overloaded
Operator Symbol
Scope resolution operator  ::
Conditional operator  ?:
dot operator  .
Member selection operator  .*
“sizeof” operator  sizeof
“typeid” operator  typeid

C++ provides more than 35 operators, covering basic arithmetic, bit manipulation, indirection, comparisons, logical operations and others. Almost all operators can be overloaded for user-defined types, with a few notable exceptions such as member access (. and .*) and the conditional operator. The rich set of overloadable operators is central to making user-defined types in C++ work as well and as easily as built-in types (so that code using them cannot tell the difference). The overloadable operators are also an essential part of many advanced C++ programming techniques, such as smart pointers. Overloading an operator does not change the precedence of calculations involving the operator, nor does it change the number of operands that the operator uses (any operand may, however, be ignored by the operator, though it will be evaluated prior to execution). Overloaded “&&” and “||” operators lose their short-circuit evaluation property.
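As a minimal sketch of operator overloading (the Vector2 type and its members are illustrative inventions, not anything defined by the standard):

#include <iostream>

struct Vector2 {
    double x, y;
};

// Overloaded + lets Vector2 be added with the same syntax as built-in types.
Vector2 operator+(Vector2 a, Vector2 b)
{
    return { a.x + b.x, a.y + b.y };
}

int main()
{
    Vector2 v = Vector2{1, 2} + Vector2{3, 4};
    std::cout << v.x << ", " << v.y << "\n"; // prints 4, 6
}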

Memory management

C++ supports four types of memory management:

  • Static memory allocation. A static variable is assigned a value at compile-time, and allocated storage in a fixed location along with the executable code. These are declared with the “static” keyword (in the sense of static storage, not in the sense of declaring a class variable).
  • Automatic memory allocation. An automatic variable is simply declared with its class name, and storage is allocated on the stack when the declaration is reached. The constructor is called when the declaration is executed, the destructor is called when the variable goes out of scope, and after the destructor the allocated memory is automatically freed.
  • Dynamic memory allocation. Storage can be dynamically allocated on the heap using manual memory management – normally calls to new and delete (though old-style C calls such as malloc() and free() are still supported).
  • With the use of a library, garbage collection is possible. The Boehm garbage collector is commonly used for this purpose.

The fine control over memory management is similar to C, but in contrast with languages that intend to hide such details from the programmer, such as Java, Perl, PHP, and Ruby.
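A minimal sketch contrasting the first three allocation styles (the variable names are illustrative):

#include <cstdlib> // std::malloc, std::free

static int counter = 0;       // static: fixed storage for the program's lifetime

int main()
{
    int local = 42;           // automatic: allocated on the stack, freed at scope exit

    int* heap = new int(local + counter); // dynamic: allocated on the free store
    delete heap;                          // must be released manually

    int* old = static_cast<int*>(std::malloc(sizeof(int))); // old-style C call, still supported
    std::free(old);
}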

Templates

C++ templates enable generic programming. C++ supports both function and class templates. Templates may be parameterized by types, compile-time constants, and other templates. Templates are implemented by instantiation at compile-time. To instantiate a template, compilers substitute specific arguments for a template’s parameters to generate a concrete function or class instance. Some substitutions are not possible; these are eliminated by an overload resolution policy described by the phrase “Substitution failure is not an error” (SFINAE). Templates are a powerful tool that can be used for generic programming, template metaprogramming, and code optimization, but this power implies a cost. Template use may increase code size, because each template instantiation produces a copy of the template code: one for each set of template arguments. However, this is the same amount of code that would be generated, or less, than if the code were written by hand.[29] This is in contrast to run-time generics seen in other languages (e.g., Java), where the type is erased at compile time and a single template body is preserved.
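For example, a minimal function template might look like this (the name max_of is an illustrative invention, chosen to avoid clashing with std::max):

#include <iostream>

// A function template: written once, without knowing the concrete type T.
template <typename T>
T max_of(T a, T b)
{
    return (a < b) ? b : a;
}

int main()
{
    std::cout << max_of(3, 7) << "\n";     // instantiated as max_of<int>
    std::cout << max_of(2.5, 1.5) << "\n"; // instantiated as max_of<double>
}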

Templates are different from macros: while both of these compile-time language features enable conditional compilation, templates are not restricted to lexical substitution. Templates are aware of the semantics and type system of their companion language, as well as all compile-time type definitions, and can perform high-level operations including programmatic flow control based on evaluation of strictly type-checked parameters. Macros are capable of conditional control over compilation based on predetermined criteria, but cannot instantiate new types, recurse, or perform type evaluation and in effect are limited to pre-compilation text-substitution and text-inclusion/exclusion. In other words, macros can control compilation flow based on pre-defined symbols but cannot, unlike templates, independently instantiate new symbols. Templates are a tool for static polymorphism (see below) and generic programming.

In addition, templates are a compile time mechanism in C++ that is Turing-complete, meaning that any computation expressible by a computer program can be computed, in some form, by a template metaprogram prior to runtime.

In summary, a template is a compile-time parameterized function or class written without knowledge of the specific arguments used to instantiate it. After instantiation, the resulting code is equivalent to code written specifically for the passed arguments. In this manner, templates provide a way to decouple generic, broadly applicable aspects of functions and classes (encoded in templates) from specific aspects (encoded in template parameters) without sacrificing performance due to abstraction.

Objects

Main article: C++ classes

C++ introduces object-oriented programming (OOP) features to C. It offers classes, which provide the four features commonly present in OOP (and some non-OOP) languages: abstraction, encapsulation, inheritance, and polymorphism. One distinguishing feature of C++ classes compared to classes in other programming languages is support for deterministic destructors, which in turn provide support for the Resource Acquisition is Initialization (RAII) concept.

Encapsulation

Encapsulation is the hiding of information to ensure that data structures and operators are used as intended and to make the usage model more obvious to the developer. C++ provides the ability to define classes and functions as its primary encapsulation mechanisms. Within a class, members can be declared as either public, protected, or private to explicitly enforce encapsulation. A public member of the class is accessible to any function. A private member is accessible only to functions that are members of that class and to functions and classes explicitly granted access permission by the class (“friends”). A protected member is accessible to members of classes that inherit from the class in addition to the class itself and any friends.

The OO principle is that all of the functions (and only the functions) that access the internal representation of a type should be encapsulated within the type definition. C++ supports this (via member functions and friend functions), but does not enforce it: the programmer can declare parts or all of the representation of a type to be public, and is allowed to make public entities that are not part of the representation of the type. Therefore, C++ supports not just OO programming, but other weaker decomposition paradigms, like modular programming.

It is generally considered good practice to make all data private or protected, and to make public only those functions that are part of a minimal interface for users of the class. This can hide the details of data implementation, allowing the designer to later fundamentally change the implementation without changing the interface in any way.[30][31]
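A minimal sketch of this practice, with an invented Account class whose representation is private behind a small public interface:

#include <iostream>

class Account {
public:
    void deposit(double amount)            // part of the minimal public interface
    {
        if (amount > 0) balance += amount; // the class enforces its own invariants
    }
    double current() const { return balance; }

private:
    double balance = 0.0;                  // hidden representation; free to change later
};

int main()
{
    Account a;
    a.deposit(10.0);
    std::cout << a.current() << "\n"; // prints 10
    // a.balance = -1;                // error: 'balance' is private
}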

Inheritance

Inheritance allows one data type to acquire properties of other data types. Inheritance from a base class may be declared as public, protected, or private. This access specifier determines whether unrelated and derived classes can access the inherited public and protected members of the base class. Only public inheritance corresponds to what is usually meant by “inheritance”. The other two forms are much less frequently used. If the access specifier is omitted, a “class” inherits privately, while a “struct” inherits publicly. Base classes may be declared as virtual; this is called virtual inheritance. Virtual inheritance ensures that only one instance of a base class exists in the inheritance graph, avoiding some of the ambiguity problems of multiple inheritance.

Multiple inheritance is a C++ feature not found in most other languages, allowing a class to be derived from more than one base class; this allows for more elaborate inheritance relationships. For example, a “Flying Cat” class can inherit from both “Cat” and “Flying Mammal”. Some other languages, such as C# or Java, accomplish something similar (although more limited) by allowing inheritance of multiple interfaces while restricting the number of base classes to one (interfaces, unlike classes, provide only declarations of member functions, no implementation or member data). An interface as in C# and Java can be defined in C++ as a class containing only pure virtual functions, often known as an abstract base class or “ABC”. The member functions of such an abstract base class are normally explicitly defined in the derived class, not inherited implicitly. C++ virtual inheritance exhibits an ambiguity resolution feature called dominance.
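A minimal sketch of the “Flying Cat” example above (the member functions are illustrative inventions):

#include <iostream>

struct Cat {
    void meow() const { std::cout << "meow\n"; }
};

struct FlyingMammal {
    void fly() const { std::cout << "airborne\n"; }
};

// Multiple inheritance: FlyingCat derives from both base classes.
struct FlyingCat : Cat, FlyingMammal {};

int main()
{
    FlyingCat fc;
    fc.meow(); // inherited from Cat
    fc.fly();  // inherited from FlyingMammal
}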

Polymorphism

Polymorphism enables one common interface for many implementations, and for objects to act differently under different circumstances.

C++ supports several kinds of static (compile-time) and dynamic (run-time) polymorphisms. Compile-time polymorphism does not allow for certain run-time decisions, while run-time polymorphism typically incurs a performance penalty.

Static polymorphism

Function overloading allows programs to declare multiple functions having the same name (but with different arguments). The functions are distinguished by the number or types of their formal parameters. Thus, the same function name can refer to different functions depending on the context in which it is used. The type returned by the function is not used to distinguish overloaded functions; two overloads differing only in return type would result in a compile-time error message.

When declaring a function, a programmer can specify for one or more parameters a default value. Doing so allows the parameters with defaults to optionally be omitted when the function is called, in which case the default arguments will be used. When a function is called with fewer arguments than there are declared parameters, explicit arguments are matched to parameters in left-to-right order, with any unmatched parameters at the end of the parameter list being assigned their default arguments. In many cases, specifying default arguments in a single function declaration is preferable to providing overloaded function definitions with different numbers of parameters.
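A short sketch showing both mechanisms side by side (all names are illustrative):

#include <iostream>
#include <string>

// Overloads: same name, distinguished by parameter types.
void print(int n)                { std::cout << "int: " << n << "\n"; }
void print(const std::string& s) { std::cout << "string: " << s << "\n"; }

// Default argument: callers may omit the second parameter.
int scaled(int value, int factor = 2)
{
    return value * factor;
}

int main()
{
    print(42);                          // selects print(int)
    print(std::string("hi"));           // selects print(const std::string&)
    std::cout << scaled(10) << "\n";    // uses the default factor, prints 20
    std::cout << scaled(10, 3) << "\n"; // prints 30
}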

Templates in C++ provide a sophisticated mechanism for writing generic, polymorphic code. In particular, through the Curiously Recurring Template Pattern, it’s possible to implement a form of static polymorphism that closely mimics the syntax for overriding virtual functions. Because C++ templates are type-aware and Turing-complete, they can also be used to let the compiler resolve recursive conditionals and generate substantial programs through template metaprogramming. Contrary to some opinion, template code will not generate bulk code after compilation with the proper compiler settings.[29]
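A minimal sketch of static polymorphism via the Curiously Recurring Template Pattern (the Shape/Circle names are illustrative):

#include <iostream>

// CRTP: the base class is parameterized by its own derived class,
// so the "virtual-like" call is resolved at compile time with no vtable.
template <typename Derived>
struct Shape {
    void draw() const { static_cast<const Derived*>(this)->do_draw(); }
};

struct Circle : Shape<Circle> {
    void do_draw() const { std::cout << "circle\n"; }
};

int main()
{
    Circle c;
    c.draw(); // statically dispatched to Circle::do_draw
}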

Dynamic polymorphism

Inheritance

Variable pointers (and references) to a base class type in C++ can refer to objects of any derived classes of that type in addition to objects exactly matching the variable type. This allows arrays and other kinds of containers to hold pointers to objects of differing types. Because assignment of values to variables usually occurs at run-time, this is necessarily a run-time phenomenon.

C++ also provides a dynamic_cast operator, which allows the program to safely attempt conversion of an object into an object of a more specific object type (as opposed to conversion to a more general type, which is always allowed). This feature relies on run-time type information (RTTI). Objects known to be of a certain specific type can also be cast to that type with static_cast, a purely compile-time construct that has no runtime overhead and does not require RTTI.
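A minimal sketch of a checked downcast with dynamic_cast (the Animal/Dog hierarchy is an illustrative invention):

#include <iostream>

struct Animal {
    virtual ~Animal() {}  // polymorphic base: enables RTTI
};

struct Dog : Animal {
    void bark() const { std::cout << "woof\n"; }
};

int main()
{
    Animal* a = new Dog;

    // Safe downcast: yields a null pointer if 'a' does not actually point to a Dog.
    if (Dog* d = dynamic_cast<Dog*>(a))
        d->bark();

    delete a;
}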

Virtual member functions

Ordinarily, when a function in a derived class overrides a function in a base class, the function to call is determined by the type of the object. A given function is overridden when there exists no difference in the number or type of parameters between two or more definitions of that function. Hence, at compile time, it may not be possible to determine the type of the object and therefore the correct function to call, given only a base class pointer; the decision is therefore put off until runtime. This is called dynamic dispatchVirtual member functions or methods[32] allow the most specific implementation of the function to be called, according to the actual run-time type of the object. In C++ implementations, this is commonly done using virtual function tables. If the object type is known, this may be bypassed by prepending a fully qualified class name before the function call, but in general calls to virtual functions are resolved at run time.

In addition to standard member functions, operator overloads and destructors can be virtual. A general rule of thumb is that if any functions in the class are virtual, the destructor should be as well. As the type of an object at its creation is known at compile time, constructors, and by extension copy constructors, cannot be virtual. Nonetheless a situation may arise where a copy of an object needs to be created when a pointer to a derived object is passed as a pointer to a base object. In such a case, a common solution is to create a clone() (or similar) virtual function that creates and returns a copy of the derived class when called.

A member function can also be made “pure virtual” by appending it with = 0 after the closing parenthesis and before the semicolon. A class containing a pure virtual function is called an abstract data type. Objects cannot be created from abstract data types; they can only be derived from. Any derived class inherits the virtual function as pure and must provide a non-pure definition of it (and all other pure virtual functions) before objects of the derived class can be created. A program that attempts to create an object of a class with a pure virtual member function or inherited pure virtual member function is ill-formed.
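A minimal sketch tying these rules together: an abstract base with a pure virtual function, a virtual destructor per the rule of thumb above, and dynamic dispatch through a base-class pointer (the names are illustrative):

#include <iostream>

struct Figure {                      // abstract: contains a pure virtual function
    virtual double area() const = 0;
    virtual ~Figure() {}             // virtual destructor, per the rule of thumb
};

struct Square : Figure {
    explicit Square(double s) : side(s) {}
    double area() const { return side * side; } // non-pure definition required
    double side;
};

int main()
{
    Figure* f = new Square(3.0);
    std::cout << f->area() << "\n"; // dynamic dispatch selects Square::area, prints 9
    delete f;
}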

Standard library

The C++ standard consists of two parts: the core language and the C++ Standard Library, which C++ programmers expect on every major implementation of C++. It includes vectors, lists, maps, algorithms (find, for_each, binary_search, random_shuffle, etc.), sets, queues, stacks, arrays, tuples, input/output facilities (iostream; reading from the console input, reading/writing from files), smart pointers for automatic memory management, regular expression support, a multi-threading library, atomics support (allowing a variable to be read or written by at most one thread at a time without any external synchronisation), time utilities (measurement, getting current time, etc.), a system for converting error reporting that doesn’t use C++ exceptions into C++ exceptions, a random number generator, and a slightly modified version of the C standard library (to make it comply with the C++ type system).

A large part of the C++ library is based on the STL. This provides useful tools such as containers (for example vectors and lists), iterators to provide these containers with array-like access, and algorithms to perform operations such as searching and sorting. Furthermore, (multi)maps (associative arrays) and (multi)sets are provided, all of which export compatible interfaces. Therefore, it is possible, using templates, to write generic algorithms that work with any container or on any sequence defined by iterators. As in C, the features of the library are accessed by using the #include directive to include a standard header. C++ provides 105 standard headers, of which 27 are deprecated.
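A minimal sketch of the container/iterator/algorithm style described above:

#include <algorithm>
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> v = {4, 1, 3, 2}; // a standard-library container

    std::sort(v.begin(), v.end());     // a generic algorithm working over iterators

    for (int n : v)                    // prints 1 2 3 4
        std::cout << n << ' ';
    std::cout << '\n';
}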

The STL, which the standard incorporates, was originally designed by Alexander Stepanov, who experimented with generic algorithms and containers for many years. When he started with C++, he finally found a language where it was possible to create generic algorithms (e.g., STL sort) that perform even better than, for example, the C standard library qsort, thanks to C++ features such as inlining and compile-time binding instead of function pointers. The standard does not refer to it as “STL”, as it is merely a part of the standard library, but the term is still widely used to distinguish it from the rest of the standard library (input/output streams, internationalization, diagnostics, the C library subset, etc.).

Most C++ compilers, and all major ones, provide a standards conforming implementation of the C++ standard library.

Parsing and processing C++ source code

It is relatively difficult to write a good C++ parser with classic parsing algorithms such as LALR(1).[33] This is partly the result of the C++ grammar not being LALR. Because of this, there are very few tools for analyzing or performing non-trivial transformations (e.g., refactoring) of existing code. One way to handle this difficulty is to choose a different syntax. More powerful parsers, such as GLR parsers, can be substantially simpler (though slower).

Parsing (in the literal sense of producing a syntax tree) is not the most difficult problem in building a C++ processing tool. Such tools must also have the same understanding of the meaning of the identifiers in the program as a compiler might have. Practical systems for processing C++ must then not only parse the source text, but be able to resolve for each identifier precisely which definition applies (e.g., they must correctly handle C++’s complex scoping rules) and what its type is, as well as the types of larger expressions.

Finally, a practical C++ processing tool must be able to handle the variety of C++ dialects used in practice (such as that supported by the GNU Compiler Collection and that of Microsoft’s Visual C++) and implement appropriate analyzers, source code transformers, and regenerate source text. Combining advanced parsing algorithms such as GLR with symbol table construction and program transformation machinery can enable the construction of arbitrary C++ tools.

Parsers exist in all major compilers, but only one compiler, Clang,[34] provides its parser in a format suitable for tool integration: it is usable as a C++ (or C) library ready for integration into, for example, an IDE.

Compatibility

Producing a reasonably standards-compliant C++ compiler has proven to be a difficult task for compiler vendors in general. For many years, different C++ compilers implemented the C++ language to different levels of compliance to the standard, and their implementations varied widely in some areas such as partial template specialization. Recent releases of most popular C++ compilers support almost all of the C++ 1998 standard.[35]

To give compiler vendors greater freedom, the C++ standards committee decided not to dictate the implementation of name mangling, exception handling, and other implementation-specific features. The downside of this decision is that object code produced by different compilers is expected to be incompatible. There were, however, attempts to standardize compilers for particular machines or operating systems (for example C++ ABI),[36] though they seem to be largely abandoned now.

Exported templates

One particular point of contention is the export keyword, intended to allow template definitions to be separated from their declarations. The first widely available compiler to implement export was Comeau C/C++, in early 2003 (five years after the release of the standard); in 2004, the beta compiler of Borland C++ Builder X was also released with export. Both of these compilers are based on the EDG C++ front end. Other compilers such as GCC do not support it at all. Beginning ANSI C++ by Ivor Horton provides example code with the keyword that will not compile in most compilers, without reference to this problem. Herb Sutter, former convener of the C++ standards committee, recommended that export be removed from future versions of the C++ standard.[37] During the March 2010 ISO C++ standards meeting, the C++ standards committee voted to remove exported templates entirely from C++11, but reserve the keyword for future use.[38]

With C

For more details on this topic, see Compatibility of C and C++.

C++ is often considered to be a superset of C, but this is not strictly true.[39] Most C code can easily be made to compile correctly in C++, but there are a few differences that cause some valid C code to be invalid or behave differently in C++.

One commonly encountered difference is that C allows implicit conversion from void* to other pointer types, but C++ does not (for type safety reasons). Another common portability issue is that C++ defines many new keywords, such as new and class, which may be used as identifiers (e.g. variable names) in a C program.
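A small sketch of code that is valid C but is rejected by a C++ compiler for both of these reasons (written as C, since the point is precisely what C allows; the identifiers are illustrative):

#include <stdlib.h>

int main(void) {
    /* Valid C, invalid C++: implicit conversion from void* to int*.
       C++ would require an explicit cast: (int *)malloc(...). */
    int *p = malloc(10 * sizeof(int));

    /* Valid C, invalid C++: 'new' and 'class' are keywords in C++. */
    int new = 1;
    int class = 2;

    free(p);
    return new + class;
}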

Some incompatibilities have been removed by the 1999 revision of the C standard (C99), which now supports C++ features such as line comments (//) and declarations mixed with code. On the other hand, C99 introduced a number of new features that C++ did not support, or that were incompatible or redundant in C++, such as variable-length arrays, native complex-number types (C++ instead provides the std::complex class in its standard library, which predates C99), designated initializers (use constructors instead), compound literals, the boolean typedef (in C++, bool is a fundamental type) and the restrict keyword.[40] Some of the C99-introduced features were included in the subsequent version of the C++ standard, C++11:[41][42][43]

  • C99 preprocessor additions [44]
    • variadic macros
    • concatenation of adjacent narrow/wide string literals
    • _Pragma()
  • long long
  • __func__
  • Headers:
    • cstdbool (stdbool.h)
    • cstdint (stdint.h)
    • cinttypes (inttypes.h).

To intermix C and C++ code, any function declaration or definition that is to be called from or used in both C and C++ must be declared with C linkage by placing it within an extern "C" {/*...*/} block. Such a function may not rely on features that depend on name mangling (i.e., function overloading).
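A common arrangement is a shared header that wraps the declarations in extern "C" only when compiled as C++, using the predefined __cplusplus macro (the function add is illustrative):

/* add.h: usable from both C and C++ translation units */
#ifdef __cplusplus
extern "C" {
#endif

int add(int a, int b);   /* C linkage: the symbol is not name-mangled */

#ifdef __cplusplus
}
#endif

A definition of add compiled by either a C or a C++ compiler can then be linked against from both languages, since both sides agree on the unmangled symbol name.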

 

Computer

Posted: January 16, 2014 in Computer
Tags:

History

Although rudimentary calculating devices first appeared in antiquity and mechanical calculating aids were invented in the 17th century, the first ‘computers’ were conceived of in the 19th century, and only emerged in their modern form in the 1940s.

First general-purpose computing device

Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the “father of the computer”,[4] he conceptualized and invented the first mechanical computer in the early 19th century. After working on his revolutionary difference engine, designed to aid in navigational calculations, in 1833 he realized that a much more general design, an Analytical Engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The Engine incorporated an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.[5][6]

The machine was about a century ahead of its time. All the parts for his machine had to be made by hand – this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage’s failure to complete the analytical engine can be chiefly attributed to difficulties not only of politics and financing, but also to his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless his son, Henry Babbage, completed a simplified version of the analytical engine’s computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906.

Analog computers

Sir William Thomson‘s third tide-predicting machine design, 1879-81

During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers.[7]

The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the brother of the more famous Lord Kelvin.[8]

The art of mechanical analog computing reached its zenith with the differential analyzer, built by H. L. Hazen and Vannevar Bush at MIT starting in 1927. This built on the mechanical integrators of James Thomson and the torque amplifiers invented by H. W. Nieman. A dozen of these devices were built before their obsolescence became obvious.

The modern computer

Alan Turing was the first to conceptualize the modern computer, a device that became known as the Universal Turing machine.

The principle of the modern computer was first described by computer scientist Alan Turing, who set out the idea in his seminal 1936 paper,[9] On Computable Numbers. Turing reformulated Kurt Gödel‘s 1931 results on the limits of proof and computation, replacing Gödel’s universal arithmetic-based formal language with the formal and simple hypothetical devices that became known as Turing machines. He proved that some such machine would be capable of performing any conceivable mathematical computation if it were representable as an algorithm. He went on to prove that there was no solution to the Entscheidungsproblem by first showing that the halting problem for Turing machines is undecidable: in general, it is not possible to decide algorithmically whether a given Turing machine will ever halt.

He also introduced the notion of a ‘Universal Machine’ (now known as a Universal Turing machine), with the idea that such a machine could perform the tasks of any other machine, or in other words, it is provably capable of computing anything that is computable by executing a program stored on tape, allowing the machine to be programmable. Von Neumann acknowledged that the central concept of the modern computer was due to this paper.[10] Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine.

Electromechanical computers

Replica of Zuse‘s Z3, the first fully automatic, digital (electromechanical) computer.

Early digital computers were electromechanical – electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes. The Z2, created by German engineer Konrad Zuse in 1939, was one of the earliest examples of an electromechanical relay computer.[11]

In 1941, Zuse followed his earlier machine up with the Z3, the world’s first working electromechanical programmable, fully automatic digital computer.[12][13] The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz.[14] Program code and data were stored on punched film. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating point numbers. Replacement of the hard-to-implement decimal system (used in Charles Babbage’s earlier design) by the simpler binary system meant that Zuse’s machines were easier to build and potentially more reliable, given the technologies available at that time.[15] The Z3 was probably a complete Turing machine.

Electronic programmable computer

Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in Dollis Hill in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation 5 years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes.[7] In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942,[16] the first electronic digital calculating device.[17] This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory.[18]

Colossus was the first electronic digital programmable computing device, and was used to break German ciphers during World War II.

During World War II, the British at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus.[18] He spent eleven months from early February 1943 designing and building the first Colossus.[19] After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944[20] and attacked its first message on 5 February.[18]

Colossus was the world’s first electronic digital programmable computer.[7] It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (the Mk I was converted to a Mk II, making ten machines in total). Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II, with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process.[21][22]

ENIAC was the first Turing-complete device, and performed ballistics trajectory calculations for the United States Army.

The US-built ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the US. Although the ENIAC was similar to the Colossus it was much faster and more flexible. It was unambiguously a Turing-complete device and could compute any problem that would fit into its memory. Like the Colossus, a “program” on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches.

It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC’s development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors.[23]

Stored program computer

Three tall racks containing electronic circuit boards

A section of the Manchester Small-Scale Experimental Machine, the first stored-program computer.

Early computing machines had fixed programs. Changing the function of such a machine required re-wiring and re-structuring of the machine.[18] With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid by Alan Turing in his 1936 paper. In 1945 Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report ‘Proposed Electronic Calculator’ was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945.[7]

Ferranti Mark 1, c. 1951.

The Manchester Small-Scale Experimental Machine, nicknamed Baby, was the world’s first stored-program computer. It was built at the Victoria University of Manchester by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948.[24] It was designed as a testbed for the Williams tube, the first random-access digital storage device.[25] Although the computer was considered “small and primitive” by the standards of its time, it was the first working machine to contain all of the elements essential to a modern electronic computer.[26] As soon as the SSEM had demonstrated the feasibility of its design, a project was initiated at the university to develop it into a more usable computer, the Manchester Mark 1.

The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world’s first commercially available general-purpose computer.[27] Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam.[28] In October 1947, the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. The LEO I computer became operational in April 1951[29] and ran the world’s first regular routine office computer job.

Transistor computers

The bipolar transistor was invented in 1947. From 1955 onwards transistors replaced vacuum tubes in computer designs, giving rise to the “second generation” of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power than vacuum tubes, so they give off less heat. Silicon junction transistors were much more reliable than vacuum tubes and had a longer, effectively indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space.

At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves.[30] Their first transistorised computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955,[31] built by the electronics division of the Atomic Energy Research Establishment at Harwell.[32][33]

The integrated circuit

The next great advance in computing power came with the advent of the integrated circuit. The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C. on 7 May 1952.[34]

Jack Kilby‘s original integrated circuit.

The first practical ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor.[35] Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958.[36] In his patent application of 6 February 1959, Kilby described his new device as “a body of semiconductor material … wherein all the components of the electronic circuit are completely integrated.”[37] Noyce also came up with his own idea of an integrated circuit half a year later than Kilby.[38] His chip solved many practical problems that Kilby’s had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby’s chip was made of germanium.

This new development heralded an explosion in the commercial and personal use of computers and led to the invention of the microprocessor. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term “microprocessor”, it is largely undisputed that the first single-chip microprocessor was the Intel 4004,[39] designed and realized by Ted Hoff, Federico Faggin, and Stanley Mazor at Intel.[40]

Programs

The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language.

In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers for example. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors.

Stored program architecture

Replica of the Small-Scale Experimental Machine (SSEM), the world’s first stored-program computer, at the Museum of Science and Industry in Manchester, England

This section applies to most common RAM machine-based computers.

In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer’s memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called “jump” instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that “remembers” the location it jumped from and another instruction to return to the instruction following that jump instruction.

Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention.

Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. For example:

      mov #0, sum     ; set sum to 0
      mov #1, num     ; set num to 1
loop: add num, sum    ; add num to sum
      add #1, num     ; add 1 to num
      cmp num, #1000  ; compare num to 1000
      ble loop        ; if num <= 1000, go back to 'loop'
      halt            ; end of program. stop running

Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in about a millionth of a second.[41]
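For comparison, the same computation in a high-level language (C++ here, matching the examples earlier in this document) is a short loop that a compiler translates into instructions much like the ones above:

#include <iostream>

int main() {
    int sum = 0;                               // mov #0, sum
    for (int num = 1; num <= 1000; ++num)      // mov #1, num / cmp / ble
        sum += num;                            // add num, sum
    std::cout << sum << '\n';                  // prints 500500
}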

Bugs

Main article: Software bug

The actual first computer bug, a moth found trapped on a relay of the Harvard Mark II computer

Errors in computer programs are called “bugs.” They may be benign and not affect the usefulness of the program, or have only subtle effects. But in some cases, they may cause the program or the entire system to “hang,” becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer’s proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program’s design.[42]

Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited for having first used the term “bugs” in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947.[43]

Machine code

In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer’s memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer’s memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches.
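The idea that a program is just numbers stored alongside data can be sketched with a toy interpreter; the instruction set below is invented for illustration and corresponds to no real machine:

#include <cstddef>
#include <iostream>
#include <vector>

enum Opcode { LOAD = 1, ADD = 2, PRINT = 3, HALT = 4 };

int main() {
    // The "program" is plain numeric data: opcode, operand, opcode, operand, ...
    std::vector<int> memory = {LOAD, 40, ADD, 2, PRINT, 0, HALT, 0};

    int accumulator = 0;
    for (std::size_t pc = 0; pc < memory.size(); pc += 2) { // pc = program counter
        int opcode  = memory[pc];
        int operand = memory[pc + 1];
        switch (opcode) {
            case LOAD:  accumulator = operand;  break;
            case ADD:   accumulator += operand; break;
            case PRINT: std::cout << accumulator << '\n'; break; // prints 42
            case HALT:  return 0;
        }
    }
}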

While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers,[44] it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer’s assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler.

A 1970s punched card containing one line from a FORTRAN program. The card reads: “Z(1) = Y + W(1)” and is labeled “PROJ039” for identification purposes.

Programming language

Main article: Programming language

Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques.

Low-level languages

Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) tend to be unique to a particular type of computer. For instance, an ARM architecture computer (such as may be found in a PDA or a hand-held videogame) cannot understand the machine language of an Intel Pentium or the AMD Athlon 64 computer that might be in a PC.[45]

Higher-level languages

Though considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High level languages are usually “compiled” into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[46] High level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles.

Program design


Program design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, using the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge.

Components

Video demonstrating the standard components of a “slimline” computer

A general purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires.

Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a “1”, and when off it represents a “0” (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits.

The control unit, ALU, registers, and basic I/O (and often other hardware closely linked with these) are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components but since the mid-1970s CPUs have typically been constructed on a single integrated circuit called a microprocessor.

Control unit

Main articles: CPU design and Control unit

Diagram showing how a particular MIPS architecture instruction would be decoded by the control system

The control unit (often called a control system or central controller) manages the computer’s various components; it reads and interprets (decodes) the program instructions, transforming them into a series of control signals which activate other parts of the computer.[47] Control systems in advanced computers may change the order of some instructions so as to improve performance.

A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[48]

The control system’s function is as follows—note that this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU:

  1. Read the code for the next instruction from the cell indicated by the program counter.
  2. Decode the numerical code for the instruction into a set of commands or signals for each of the other systems.
  3. Increment the program counter so it points to the next instruction.
  4. Read whatever data the instruction requires from cells in memory (or perhaps from an input device). The location of this required data is typically stored within the instruction code.
  5. Provide the necessary data to an ALU or register.
  6. If the instruction requires an ALU or specialized hardware to complete, instruct the hardware to perform the requested operation.
  7. Write the result from the ALU back to a memory location or to a register or perhaps an output device.
  8. Jump back to step (1).

Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as “jumps” and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow).

The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen.

Arithmetic logic unit (ALU)

Main article: Arithmetic logic unit

The ALU is capable of performing two classes of operations: arithmetic and logic.[49]

The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometry functions such as sine, cosine, etc., and square roots. Some can only operate on whole numbers (integers) whilst others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation—although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other (“is 64 greater than 65?”).
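As a sketch of that last point about breaking operations down, multiplication can be reduced to repeated addition when an ALU supports only the simplest operations (this assumes a non-negative multiplier for brevity):

// Multiplication built only from addition and comparison.
int multiply(int a, int b) {
    int product = 0;
    for (int i = 0; i < b; ++i)   // b is assumed >= 0 here
        product += a;             // repeated addition
    return product;
}

int main() {
    return multiply(6, 7) == 42 ? 0 : 1;   // exits with status 0
}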

Logic operations involve Boolean logic: AND, OR, XOR and NOT. These can be useful for creating complicated conditional statements and processing boolean logic.

Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously.[50] Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices.

Memory

Main article: Computer data storage

Magnetic core memory was the computer memory of choice throughout the 1960s, until it was replaced by semiconductor memory.

A computer’s memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered “address” and can store a single number. The computer can be instructed to “put the number 123 into the cell numbered 1357” or to “add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595.” The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software’s responsibility to give significance to what the memory sees as nothing but a series of numbers.

In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256); either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two’s complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory.
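A short sketch of these ranges and of the two’s complement reinterpretation (note that converting the bit pattern 0xFF to a signed byte is formally implementation-defined in the C++ standards of this era, though virtually all hardware uses two’s complement):

#include <cstdint>
#include <iostream>

int main() {
    std::uint8_t u = 255;   // one byte read as unsigned: 0 to 255
    std::int8_t  s = -128;  // the same byte width read as signed: -128 to +127
    std::cout << int(u) << ' ' << int(s) << '\n';               // prints 255 -128

    // In two's complement, the bit pattern 0xFF means 255 unsigned but -1 signed.
    std::uint8_t bits = 0xFF;
    std::cout << int(static_cast<std::int8_t>(bits)) << '\n';   // prints -1
}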

The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer’s speed.

Computer main memory comes in two principal varieties: random-access memory or RAM and read-only memory or ROM. RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, therefore the CPU can only read from it. ROM is typically used to store the computer’s initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer’s operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM, however, so its use is restricted to applications where high speed is unnecessary.[51]

In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer’s part.

Input/output (I/O)

Main article: Input/output

Hard disk drives are common storage devices used with computers.

I/O is the means by which a computer exchanges information with the outside world.[52] Devices that provide input or output to the computer are called peripherals.[53] On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O.

I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics.[citation needed] Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O.

Multitasking

Main article: Computer multitasking

While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e., having the computer switch rapidly between running each program in turn.[54]

One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running “at the same time,” then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time even though only one is ever executing in any given instant. This method of multitasking is sometimes termed “time-sharing” since each program is allocated a “slice” of time in turn.[55]

Before the era of cheap computers, the principal use for multitasking was to allow many people to share the same computer.

Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a “time slice” until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run simultaneously without unacceptable speed loss.

Multiprocessing

Main article: Multiprocessing

Cray designed many supercomputers that used multiprocessing heavily.

Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed only in large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result.

Supercomputers in particular often have highly distinctive architectures that differ significantly from the basic stored-program architecture and from general purpose computers.[56] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful only for specialized tasks due to the large scale of program organization required to successfully utilize most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called “embarrassingly parallel” tasks.

Networking and the Internet

Main articles: Computer networking and Internet

Visualization of a portion of the routes on the Internet

Computers have been used to coordinate information between multiple locations since the 1950s. The U.S. military’s SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre.[57]

In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET.[58]The technologies that made the Arpanet possible spread and evolved.

In time, the network spread beyond academic and military institutions and became known as the Internet. The emergence of networking involved a redefinition of the nature and boundaries of the computer. Computer operating systems and applications were modified to include the ability to define and access the resources of other computers on the network, such as peripheral devices, stored information, and the like, as extensions of the resources of an individual computer. Initially these facilities were available primarily to people working in high-tech environments, but in the 1990s the spread of applications like e-mail and the World Wide Web, combined with the development of cheap, fast networking technologies like Ethernet and ADSL, saw computer networking become almost ubiquitous. In fact, the number of computers that are networked is growing phenomenally. A very large proportion of personal computers regularly connect to the Internet to communicate and receive information. “Wireless” networking, often utilizing mobile phone networks, has meant networking is becoming increasingly ubiquitous even in mobile computing environments.

Computer architecture paradigms

There are many types of computer architectures:

  • Quantum computer vs. Chemical computer
  • Scalar processor vs. Vector processor
  • Non-Uniform Memory Access (NUMA) computers
  • Register machine vs. Stack machine
  • Harvard architecture vs. von Neumann architecture
  • Cellular architecture

Of all these abstract machines, a quantum computer holds the most promise for revolutionizing computing.[59]

Logic gates are a common abstraction which can apply to most of the above digital or analog paradigms.

The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity.

Misconceptions

Main articles: Human computer and Harvard Computers

Women as computers in NACA High Speed Flight Station “Computer Room”

A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word “computer” is synonymous with a personal electronic computer, the modern[60] definition of a computer is literally “A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information.”[61] Any device which processes information qualifies as a computer, especially if the processing is purposeful.

Required technology

Historically, computers evolved from mechanical computers and eventually from vacuum tubes to transistors. However, conceptually computational systems as flexible as a personal computer can be built out of almost anything. For example, a computer can be made out of billiard balls (billiard ball computer), an often quoted example.[citation needed] More realistically, modern computers are made out of transistors made of photolithographed semiconductors.

There is active research to make computers out of many promising new types of technology, such as optical computers, DNA computers, neural computers, and quantum computers. Most computers are universal, and are able to calculate any computable function, and are limited only by their memory capacity and operating speed. However, different designs of computers can give very different performance for particular problems; for example quantum computers can potentially break some modern encryption algorithms (by quantum factoring) very quickly.

Capabilities of computers (In general)

1.) Ability to perform certain logical and mathematical functions.
2.) Ability to store data and/or information.
3.) Ability to retrieve data and/or information.
4.) Ability to search data and/or information.
5.) Ability to compare data and/or information.
6.) Ability to sort data and/or information.
7.) Ability to control errors.
8.) Ability to check itself.
9.) Ability to perform a set of tasks with speed and accuracy.
10.) Ability to do a set of tasks repetitively.
11.) Ability to provide new time dimensions.
12.) Excellent substitute for writing instrument and paper.

Limitations of computers (In general)

1.) Dependence on prepared set of instructions.
2.) Inability to derive meanings from objects.
3.) Inability to generate data and/or information on its own.
4.) Cannot correct wrong instructions.
5.) Dependence on electricity.
6.) Dependence on human interventions.
7.) Inability to decide on its own.
8.) Not maintenance-free.
9.) Limited to the processing speed of its interconnected peripherals.
10.) Limited to the available amount of storage on primary data storage devices.
11.) Limited to the available amount of storage on secondary data storage devices.
12.) Not a long-term investment.

Further topics

Artificial intelligence

A computer will solve problems in exactly the way it is programmed to, without regard to efficiency, alternative solutions, possible shortcuts, or possible errors in the code. Computer programs that learn and adapt are part of the emerging field of artificial intelligence and machine learning.

Hardware

The term hardware covers all of those parts of a computer that are tangible objects. Circuits, displays, power supplies, cables, keyboards, printers and mice are all hardware.

History of computing hardware

First generation (mechanical/electromechanical):
  Calculators: Pascal’s calculator, Arithmometer, Difference engine, Quevedo’s analytical machines
  Programmable devices: Jacquard loom, Analytical engine, IBM ASCC/Harvard Mark I, Harvard Mark II, IBM SSEC, Z3
Second generation (vacuum tubes):
  Calculators: Atanasoff–Berry Computer, IBM 604, UNIVAC 60, UNIVAC 120
  Programmable devices: Colossus, ENIAC, Manchester Small-Scale Experimental Machine, EDSAC, Manchester Mark 1, Ferranti Pegasus, Ferranti Mercury, CSIRAC, EDVAC, UNIVAC I, IBM 701, IBM 702, IBM 650, Z22
Third generation (discrete transistors and SSI, MSI, LSI integrated circuits):
  Mainframes: IBM 7090, IBM 7080, IBM System/360, BUNCH
  Minicomputer: PDP-8, PDP-11, IBM System/32, IBM System/36
Fourth generation (VLSI integrated circuits):
  Minicomputer: VAX, IBM System i
  4-bit microcomputer: Intel 4004, Intel 4040
  8-bit microcomputer: Intel 8008, Intel 8080, Motorola 6800, Motorola 6809, MOS Technology 6502, Zilog Z80
  16-bit microcomputer: Intel 8088, Zilog Z8000, WDC 65816/65802
  32-bit microcomputer: Intel 80386, Pentium, Motorola 68000, ARM
  64-bit microcomputer:[62] Alpha, MIPS, PA-RISC, PowerPC, SPARC, x86-64, ARMv8-A
  Embedded computer: Intel 8048, Intel 8051
  Personal computer: Desktop computer, Home computer, Laptop computer, Personal digital assistant (PDA), Portable computer, Tablet PC, Wearable computer
Theoretical/experimental: Quantum computer, Chemical computer, DNA computing, Optical computer, Spintronics-based computer

Other hardware topics

Peripheral device (input/output):
  Input: Mouse, keyboard, joystick, image scanner, webcam, graphics tablet, microphone
  Output: Monitor, printer, loudspeaker
  Both: Floppy disk drive, hard disk drive, optical disc drive, teleprinter
Computer busses:
  Short range: RS-232, SCSI, PCI, USB
  Long range (computer networking): Ethernet, ATM, FDDI

Software

Main article: Computer software

Software refers to parts of the computer which do not have a material form, such as programs, data, protocols, etc. When software is stored in hardware that cannot easily be modified (such as BIOS ROM in an IBM PC compatible), it is sometimes called “firmware.”

Operating system:
  Unix and BSD: UNIX System V, IBM AIX, HP-UX, Solaris (SunOS), IRIX, List of BSD operating systems
  GNU/Linux: List of Linux distributions, Comparison of Linux distributions
  Microsoft Windows: Windows 95, Windows 98, Windows NT, Windows 2000, Windows Me, Windows XP, Windows Vista, Windows 7, Windows 8
  DOS: 86-DOS (QDOS), IBM PC DOS, MS-DOS, DR-DOS, FreeDOS
  Mac OS: Mac OS classic, Mac OS X
  Embedded and real-time: List of embedded operating systems
  Experimental: Amoeba, Oberon/Bluebottle, Plan 9 from Bell Labs
Library:
  Multimedia: DirectX, OpenGL, OpenAL
  Programming library: C standard library, Standard Template Library
Data:
  Protocol: TCP/IP, Kermit, FTP, HTTP, SMTP
  File format: HTML, XML, JPEG, MPEG, PNG
User interface:
  Graphical user interface (WIMP): Microsoft Windows, GNOME, KDE, QNX Photon, CDE, GEM, Aqua
  Text-based user interface: Command-line interface, Text user interface
Application:
  Office suite: Word processing, Desktop publishing, Presentation program, Database management system, Scheduling & Time management, Spreadsheet, Accounting software
  Internet Access: Browser, E-mail client, Web server, Mail transfer agent, Instant messaging
  Design and manufacturing: Computer-aided design, Computer-aided manufacturing, Plant management, Robotic manufacturing, Supply chain management
  Graphics: Raster graphics editor, Vector graphics editor, 3D modeler, Animation editor, 3D computer graphics, Video editing, Image processing
  Audio: Digital audio editor, Audio playback, Mixing, Audio synthesis, Computer music
  Software engineering: Compiler, Assembler, Interpreter, Debugger, Text editor, Integrated development environment, Software performance analysis, Revision control, Software configuration management
  Educational: Edutainment, Educational game, Serious game, Flight simulator
  Games: Strategy, Arcade, Puzzle, Simulation, First-person shooter, Platform, Massively multiplayer, Interactive fiction
  Misc: Artificial intelligence, Antivirus software, Malware scanner, Installer/Package management systems, File manager

Languages

There are thousands of different programming languages—some intended to be general purpose, others useful only for highly specialized applications.

Programming languages
Lists of programming languages: Timeline of programming languages, List of programming languages by category, Generational list of programming languages, List of programming languages, Non-English-based programming languages
Commonly used assembly languages: ARM, MIPS, x86
Commonly used high-level programming languages: Ada, BASIC, C, C++, C#, COBOL, Fortran, Java, Lisp, Pascal, Object Pascal
Commonly used scripting languages: Bourne script, JavaScript, Python, Ruby, PHP, Perl

Professions and organizations

As the use of computers has spread throughout society, there are an increasing number of careers involving computers.

Computer-related professions
Hardware-related: Electrical engineering, Electronic engineering, Computer engineering, Telecommunications engineering, Optical engineering, Nanoengineering
Software-related: Computer science, Computer engineering, Desktop publishing, Human–computer interaction, Information technology, Information systems, Computational science, Software engineering, Video game industry, Web design

The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature.

Organizations
Standards groups: ANSI, IEC, IEEE, IETF, ISO, W3C
Professional societies: ACM, AIS, IET, IFIP, BCS
Free/open source software groups: Free Software Foundation, Mozilla Foundation, Apache Software Foundation

“Computer technology” and “Computer system” redirect here. For the company, see Computer Technology Limited. For other uses, see Computer (disambiguation) and Computer system (disambiguation).

Computer

A computer is a general purpose device that can be programmed to carry out a set of arithmetic or logical operations. Since a sequence of operations can be readily changed, the computer can solve more than one kind of problem.

Conventionally, a computer consists of at least one processing element, typically a central processing unit (CPU), and some form of memory. The processing element carries out arithmetic and logic operations, and a sequencing and control unit can change the order of operations based on stored information. Peripheral devices allow information to be retrieved from an external source, and the result of operations saved and retrieved.

In World War II, mechanical analog computers were used for specialized military applications. During this time the first electronic digital computers were developed. Originally they were the size of a large room, consuming as much power as several hundred modern personal computers (PCs).[1]

Modern computers based on integrated circuits are millions to billions of times more capable than the early machines, and occupy a fraction of the space.[2] Simple computers are small enough to fit into mobile devices, and mobile computers can be powered by small batteries. Personal computers in their various forms are icons of the Information Age and are what most people think of as “computers.” However, the embedded computers found in many devices from MP3 players to fighter aircraft and from toys to industrial robots are the most numerous.

Nokia

Posted: January 16, 2014 in Nokia
Tags:

Nokia Corporation[3] (Finnish: Nokia Oyj, Swedish: Nokia Abp, Finnish pronunciation: [ˈnokiɑ], English /ˈnɒkiə/) is a Finnish multinational communications and information technology corporation headquartered in Espoo, Finland.[1] Its Nokia Solutions and Networks company provides telecommunications network equipment and services,[4] while Internet services, including applications, games, music, media and messaging, and free-of-charge digital map information and navigation services, are delivered through its wholly owned subsidiary Navteq.[5]

As of 2012, Nokia employs 101,982 people across 120 countries, conducts sales in more than 150 countries, and reports annual revenues of around €30 billion.[2] By the fourth quarter of 2012, it was the world’s second-largest mobile phone maker in terms of unit sales (after Samsung), with a global market share of 18.0%.[6] By 2013, however, its smartphone market share had fallen to 3.2%,[7] and its mobile phone revenue fell 40% in Q2 2013. Nokia is a public limited-liability company listed on the Helsinki Stock Exchange and New York Stock Exchange.[8] It is the world’s 274th-largest company measured by 2013 revenues according to the Fortune Global 500.[9]

Nokia was the world’s largest vendor of mobile phones from 1998 to 2012.[6] However, over the past five years its market share declined as a result of the growing use of touchscreen smartphones from other vendors, principally the iPhone by Apple and devices running Android, an operating system created by Google. The corporation’s share price fell from a high of US$40 in late 2007 to under US$2 in mid-2012.[10][11] In a bid to recover, Nokia announced a strategic partnership with Microsoft in February 2011, leading to the replacement of Symbian with Microsoft’s Windows Phone operating system in all Nokia smartphones.[12] Following the replacement of Symbian, Nokia’s smartphone sales figures, which had previously been increasing, collapsed dramatically.[13] From the beginning of 2011 until 2013, Nokia fell from its position as the world’s largest smartphone vendor to tenth largest.[14]

On 2 September 2013, Microsoft announced its intent to purchase Nokia’s mobile phone business unit as part of an overall deal totaling €5.44 billion (US$7.17 billion). Stephen Elop, Nokia’s former CEO, and several other executives would join Microsoft as part of the deal.[15][16]

Nokia
Nokia wordmark.svg
Type Julkinen osakeyhtiö
(Public company)
Industry Telecommunications equipment
Internet
Computer software
Founded Tampere, Grand Duchy of Finland (1865)
incorporated in Nokia (1871)
Headquarters Espoo, Finland[1]
Area served Worldwide
Services Maps and navigation, music, messaging and media
Software solutions
(See services listing)
Revenue €30.176 billion (2012)[2]
Operating income € −2.303 billion (2012)[2]
Net income € −3.106 billion (2012)[2]
Total assets €29.949 billion (2012)[2]
Total equity €8.061 billion (2012)[2]
Employees 97,800 (2012)[2]
Divisions Mobile Solutions
Mobile Phones
Markets
Subsidiaries Nokia Solutions and Networks
Navteq
Website Nokia.com

History

1865 to 1967

Eduard Polón, painted by the artist Eero Järnefelt

Fredrik Idestam, co-founder of Nokia.
Leo Mechelin, co-founder of Nokia.

The predecessors of the modern Nokia were the Nokia Company (Nokia Aktiebolag), Finnish Rubber Works Ltd (Suomen Gummitehdas Oy) and Finnish Cable Works Ltd (Suomen Kaapelitehdas Oy).[17]

Eduard Polón (1861–1930), Nokia’s founder, was a Finnish business leader (source: Nokia Corporation’s official history, pages 12–13, Martin Häikiö, Edita, 2001). He was the founder, CEO, chairman of the board and largest shareholder of Suomen Gummitehdas (“Finnish Rubber Works”). He led the development of a new rubber industry in Finland, and his group of companies built a modern wood and cable industry in the country. Polón decided to use the name “Nokia”, the town where his factories were based, as a brand name to differentiate his products from those of Russian competitors.[citation needed]

Although these three companies—Suomen Gummitehdas, Suomen Kaapelitehdas and Nokia Ab—were not formally merged, as the law did not allow it at the time, Eduard Polón continued to build a successful conglomerate that later became the Nokia PLC of today. Polón was the chairman, managing director, and the largest owner of the group for 30 years.[citation needed]

Nokia Ab’s history started in 1865 when mining engineer Fredrik Idestam established a groundwood pulp mill on the banks of the Tammerkoski rapids in the town of Tampere, in southwestern Finland (part of the Russian Empire).[18] In 1868, Idestam built a second mill near the town of Nokia, fifteen kilometres (nine miles) west of Tampere, by the Nokianvirta river, which had better resources for hydropower production.[19] In 1871, Idestam, with the help of his close friend and statesman Leo Mechelin, renamed and transformed his firm into a share company, thereby founding Nokia Ab. However, the brand name Nokia of today did not come from the company name, but from the name of the town where Polón’s factories were located.[19]

Towards the end of the 19th century, Mechelin sought to expand into the electricity business, but his aspiration was initially thwarted by Idestam’s opposition. However, Idestam’s retirement from the management of the company in 1896 allowed Mechelin to become the company’s chairman (from 1898 until 1914), and he subsequently won the support of most of the shareholders.[19] In 1902, Nokia added electricity generation to its business activities.[18]

Industrial conglomerate

In 1898, Eduard Polón founded Finnish Rubber Works, a manufacturer of galoshes and other rubber products, which later became Nokia’s rubber business.[17] At the beginning of the 20th century, Finnish Rubber Works established its factories near the town of Nokia and began using Nokia as its product brand.[20] In 1912, Arvid Wickström founded Finnish Cable Works, a producer of telephone, telegraph and electrical cables and the foundation of Nokia’s cable and electronics businesses.[17] At the end of the 1910s, shortly after World War I, the Nokia Company was nearing bankruptcy.[21] To ensure the continuation of electricity supply from Nokia’s generators, Finnish Rubber Works acquired the business of the insolvent company.[21] In 1922, Finnish Rubber Works acquired Finnish Cable Works.[22] In 1937, Verner Weckman, a wrestler and Finland’s first Olympic gold medalist, became president of Finnish Cable Works after 16 years as its technical director.[23] After World War II, Finnish Cable Works supplied cables to the Soviet Union as part of Finland’s war reparations, which gave the company a good foothold for later trade.[23]

The three companies, which had been jointly owned since 1922, were merged to form a new industrial conglomerate, Nokia Corporation, in 1967, paving the way for Nokia’s future as a global corporation.[24] The new company was involved in many industries, producing at one time or another paper products, car and bicycle tires, footwear (including rubber boots), communications cables, televisions and other consumer electronics, personal computers, electricity generation machinery, robotics, capacitors, military communications and equipment (such as the SANLA M/90 device and the M61 gas mask for the Finnish Army), plastics, aluminum and chemicals.[25] Each business unit had its own director who reported to the first Nokia Corporation president, Björn Westerlund. As the president of Finnish Cable Works, he had been responsible for setting up the company’s first electronics department in 1960, sowing the seeds of Nokia’s future in telecommunications.[26]

Eventually, the company decided to leave consumer electronics behind in the 1990s and focus solely on the fastest-growing segments of telecommunications.[27] Nokian Tyres, the tire manufacturer, split from Nokia Corporation to form its own company in 1988,[28] and two years later Nokian Footwear, a manufacturer of rubber boots, was founded.[20] In 1989, Nokia also sold the original paper business; this company (Nokian Paperi) is currently owned by SCA. During the rest of the 1990s, Nokia divested itself of all of its non-telecommunications businesses.[27]

1967 to 2000

The seeds of the current incarnation of Nokia were planted with the founding of the electronics section of the cable division in 1960 and the production of its first electronic device in 1962: a pulse analyzer designed for use in nuclear power plants.[26] In the 1967 merger, that section was separated into its own division and began manufacturing telecommunications equipment. A key CEO and subsequent chairman of the board was vuorineuvos Björn “Nalle” Westerlund (1912–2009), who founded the electronics department and let it run at a loss for 15 years.

Networking equipment

A Nokia P30

In the 1970s, Nokia became more involved in the telecommunications industry by developing the Nokia DX 200, a digital switch for telephone exchanges. The DX 200 became the workhorse of the network equipment division. Its modular and flexible architecture enabled it to be developed into various switching products.[29] In 1984, development of a version of the exchange for the Nordic Mobile Telephony network was started.[30]

For a while in the 1970s, Nokia’s network equipment production was separated into Telefenno, a company jointly owned by the parent corporation and by a company owned by the Finnish state. In 1987, the state sold its shares to Nokia and in 1992 the name was changed to Nokia Telecommunications.[31]

In the 1970s and 1980s, Nokia developed the Sanomalaitejärjestelmä (“Message device system”), a digital, portable and encrypted text-based communications device for the Finnish Defence Forces.[32] The current main unit used by the Defence Forces is the Sanomalaite M/90 (SANLA M/90).[33]

In 1998, Check Point established a partnership with Nokia, which bundled Check Point’s software with Nokia’s network security appliances.[34]

First mobile phones

The Mobira Cityman 150, Nokia’s NMT-900 mobile phone from 1989 (left), compared to the Nokia 1100 from 2003.[35] The Mobira Cityman line was launched in 1987.[36]

The technologies that preceded modern cellular mobile telephony systems were the various “0G” pre-cellular mobile radio telephony standards. Nokia had been producing commercial and some military mobile radio communications technology since the 1960s, although this part of the company was sold some time before the later company rationalization. Since 1964, Nokia had developed VHF radio simultaneously with Salora Oy. In 1966, Nokia and Salora started developing the ARP standard (which stands for Autoradiopuhelin, or car radio phone in English), a car-based mobile radio telephony system and the first commercially operated public mobile phone network in Finland. It went online in 1971 and offered 100% coverage in 1978.[37]

In 1979, the merger of Nokia and Salora resulted in the establishment of Mobira Oy. Mobira began developing mobile phones for the NMT (Nordic Mobile Telephony) network standard, the first-generation, first fully automatic cellular phone system that went online in 1981.[38] In 1982, Mobira introduced its first car phone, the Mobira Senator for NMT-450 networks.[38]

Nokia bought Salora Oy in 1984 and, now owning 100% of the company, changed the name of its telecommunications branch to Nokia-Mobira Oy. The Mobira Talkman, launched in 1984, was one of the world’s first transportable phones. In 1987, Nokia introduced one of the world’s first handheld phones, the Mobira Cityman 900 for NMT-900 networks (which, compared to NMT-450, offered a better signal but a shorter range). While the Mobira Senator of 1982 had weighed 9.8 kg (22 lb) and the Talkman just under 5 kg (11 lb), the Mobira Cityman weighed only 800 g (28 oz) with the battery and had a price tag of 24,000 Finnish marks (approximately €4,560).[36] Despite the high price, the first phones were almost snatched from the sales assistants’ hands. Initially, the mobile phone was a “yuppie” product and a status symbol.[25]

Nokia’s mobile phones got a big publicity boost in 1987, when Soviet leader Mikhail Gorbachev was pictured using a Mobira Cityman to make a call from Helsinki to his communications minister in Moscow. This led to the phone’s nickname, the “Gorba”.[36]

In 1988, Jorma Nieminen, resigning from the post of CEO of the mobile phone unit, along with two other employees from the unit, started a notable mobile phone company of their own, Benefon Oy (since renamed GeoSentric).[39] One year later, Nokia-Mobira Oy became Nokia Mobile Phones.

Involvement in GSM

Nokia was one of the key developers of GSM (Global System for Mobile Communications),[40] the second-generation mobile technology which could carry data as well as voice traffic. NMT (Nordic Mobile Telephony), the world’s first mobile telephony standard that enabled international roaming, provided valuable experience for Nokia for its close participation in developing GSM, which was adopted in 1987 as the new European standard for digital mobile technology.[41][42]

Nokia delivered its first GSM network to the Finnish operator Radiolinja in 1989.[43] The world’s first commercial GSM call was made on 1 July 1991 in Helsinki, Finland over a Nokia-supplied network, by then Prime Minister of Finland Harri Holkeri, using a prototype Nokia GSM phone.[43] In 1992, the first GSM phone, the Nokia 1011, was launched.[43][44] The model number refers to its launch date, 10 November.[44] The Nokia 1011 did not yet employ Nokia’s characteristic ringtone, the Nokia tune. It was introduced as a ringtone in 1994 with the Nokia 2100 series.[45]

GSM’s high-quality voice calls, easy international roaming and support for new services like text messaging (Short Message Service) laid the foundations for a worldwide boom in mobile phone use.[43] GSM came to dominate the world of mobile telephony in the 1990s, by mid-2008 accounting for about three billion mobile telephone subscribers, with more than 700 mobile operators across 218 countries and territories. New connections were being added at the rate of 15 per second, or 1.3 million per day.[46]

Personal computers and IT equipment

The Nokia Booklet 3G mini laptop.

In the 1980s, Nokia’s computer division Nokia Data produced a series of personal computers called MikroMikko.[47] MikroMikko was Nokia Data’s attempt to enter the business computer market. The first model in the line, MikroMikko 1, was released on 29 September 1981,[48] around the same time as the first IBM PC. However, the personal computer division was sold to the British ICL (International Computers Limited) in 1991, which later became part of Fujitsu.[49] MikroMikko remained a trademark of ICL and later Fujitsu. Internationally the MikroMikko line was marketed by Fujitsu as the ErgoPro.

Fujitsu later transferred its personal computer operations to Fujitsu Siemens Computers, which shut down its only factory in Espoo, Finland (in the Kilo district, where computers had been produced since the 1960s) at the end of March 2000,[50][51] thus ending large-scale PC manufacturing in the country. Nokia was also known for producing very high quality CRT and early TFT LCD displays for PCs and larger systems. The Nokia Display Products branded business was sold to ViewSonic in 2000.[52] In addition to personal computers and displays, Nokia used to manufacture DSL modems and digital set-top boxes.

Nokia re-entered the PC market in August 2009 with the introduction of the Nokia Booklet 3G mini laptop.[53]

Challenges of growth

The Nokia House, Nokia’s head office located by the Gulf of Finland in Keilaniemi, Espoo, was constructed between 1995 and 1997. It is the workplace of more than 1,000 Nokia employees.[25]

In the 1980s, during the era of its CEO Kari Kairamo, Nokia expanded into new fields, mostly by acquisitions. In the late 1980s and early 1990s, the corporation ran into serious financial problems, a major reason being heavy losses in its television manufacturing division and businesses that were simply too diverse.[54] These problems, and a suspected total burnout, probably contributed to Kairamo taking his own life in 1988. After Kairamo’s death, Simo Vuorilehto became Nokia’s chairman and CEO. In 1990–1993, Finland underwent a severe economic depression,[55] which also struck Nokia. Under Vuorilehto’s management, Nokia was severely overhauled. The company responded by streamlining its telecommunications divisions and by divesting itself of the television and PC divisions.[56]

Probably the most important strategic change in Nokia’s history was made in 1992, however, when the new CEO Jorma Ollila made a crucial strategic decision to concentrate solely on telecommunications.[27] Thus, during the rest of the 1990s, the rubber, cable and consumer electronics divisions were gradually sold as Nokia continued to divest itself of all of its non-telecommunications businesses.[27]

As late as 1991, more than a quarter of Nokia’s turnover still came from sales in Finland. However, after the strategic change of 1992, Nokia saw a huge increase in sales to North America, South America and Asia.[57] The exploding worldwide popularity of mobile telephones, beyond even Nokia’s most optimistic predictions, caused a logistics crisis in the mid-1990s.[58] This prompted Nokia to overhaul its entire logistics operation.[59] By 1998, Nokia’s focus on telecommunications and its early investment in GSM technologies had made the company the world’s largest mobile phone manufacturer,[57] a position it would hold for the next 14 consecutive years until 2012. Between 1996 and 2001, Nokia’s turnover increased almost fivefold from 6.5 billion euros to 31 billion euros.[57] Logistics continues to be one of Nokia’s major advantages over its rivals, along with greater economies of scale.[60][61]

2000 to present

Product releases

The Nokia 3310, sold between 2000 and 2003, was arguably one of the most well-known mobile phones.

Reduction in size of Nokia mobile phones

Nokia launched its Nokia 1100 handset in 2003;[35] with over 200 million units shipped, it was the best-selling mobile phone of all time and the world’s top-selling consumer electronics product.[62] Nokia was one of the first players in the mobile space to recognize a market opportunity in combining a game console and a mobile phone (both of which many gamers were carrying in 2003) into the N-Gage. The N-Gage was a mobile phone and game console meant to lure gamers away from the Game Boy Advance, though it cost twice as much.[63] The N-Gage was not a success, and from 2007 to 2008 Nokia instead offered an N-Gage service on existing Symbian S60 smartphones for playing games.

Nokia Productions, directed by Spike Lee, was the first mobile filmmaking project. Work began in April 2008, and the film premiered in October 2008.[64]

In 2009, the company announced a high-end Windows-based netbook called the Nokia Booklet 3G.[53] On 2 September 2009, Nokia launched two new music and social networking phones, the X6 and X3.[65] The Nokia X6 featured 32 GB of on-board memory with a 3.2″ touch interface and offered a music playback time of 35 hours. The Nokia X3, the first Series 40 Ovi Store-enabled device, was a music device with stereo speakers, built-in FM radio and a 3.2-megapixel camera. In 2009, Nokia also unveiled the 7705 Twist, a square phone that swiveled open to reveal a full QWERTY keypad, featuring a 3-megapixel camera, web browsing, voice commands and weighing around 3.44 ounces (98 g).[66]

On 9 August 2012, Nokia launched two new handsets in its Asha range for the Indian market, equipped with a cloud-accelerated Nokia browser to help users browse the Internet faster and lower their data charges.[67]

Symbian

Symbian was the main operating system of Nokia smartphones until 2012; the Nokia 808 PureView, launched in February 2012, was the last Symbian smartphone.

In Q4 2004, Nokia released its first touchscreen phone, the Nokia 7710.

In September 2006, Nokia announced the Nokia N95, a Symbian-powered slider smartphone. It was released in February 2007 as the first phone with a 5-megapixel camera. It became hugely popular. An 8 GB variant was released in October 2007.

In November 2007, Nokia announced and released the Nokia N82, its first Nseries phone with a xenon flash. At the Nokia World conference in December 2007, Nokia announced its “Comes With Music” program: Nokia device buyers would receive a year of complimentary access to music downloads.[68] The service became commercially available in the second half of 2008.

The first Nseries device, the N90, utilised the older Symbian OS 8.1 mobile operating system, as did the N70. Subsequently, Nokia switched to Symbian OS 9 for all later Nseries devices (except the N72, which was based on the N70). Newer Nseries devices incorporate newer revisions of Symbian OS 9 that include Feature Packs. As of April 2012, the N800, N810, N900, N9 and N950 are the only Nseries devices (excluding Lumia devices) not to use Symbian OS; they use the Linux-based Maemo, except the N9 and N950, which use MeeGo.[69]

In 2008, Nokia released the Nokia E71, marketed to compete directly with BlackBerry-type devices by offering a full QWERTY keyboard at a cheaper price.

The Nokia N8, from September 2010, was the first device to run the Symbian^3 mobile operating system. Nokia revealed that the N8 would be the last device in its flagship Nseries to ship with Symbian OS.[70][71]

The Nokia 808 PureView has a 41-megapixel camera, more than any other smartphone on the market. It was released in February 2012 and contains a 1.3 GHz processor. On 25 January 2013, Nokia announced this was the last Symbian smartphone the company would make.[72]

  • Nokia 6600 from 2003, with a VGA camera, Bluetooth and expandable memory. It was the first Nokia and Symbian device to sell over a million units. (Series 60 2nd)

  • Nokia N73 released in August 2006, with 3G and a front camera. (S60 3rd)

  • The Nokia N95 released in March 2007, with a 5-megapixel camera and sliding multimedia keys. Often considered Nokia’s hero smartphone. (S60 3rd)

  • Nokia E71 with a QWERTY keyboard, released in July 2008. (S60 3rd)

  • The Nokia 5800 XpressMusic, Nokia’s first full-touch smartphone. (S60 5th)

  • The Nokia N97, released in June 2009, has a sliding QWERTY keyboard and 32 GB of on-board storage. (S60 5th)

  • The Nokia N8, released in September 2010, is the first Symbian^3 device, and the first to feature a 12-megapixel autofocus lens. (Symbian^3/Anna/Belle)

  • The Nokia 808 PureView, released in February 2012 as the last Symbian smartphone, features a 41-megapixel camera and a 1.3 GHz CPU. (Belle)

Linux devices

Nokia N9 running MeeGo Harmattan

Alongside Symbian, Nokia also made Linux-based devices, the first of which were the Nokia Internet tablets and the Nokia N900, which ran Maemo, a Debian-based version of Linux.[73]

Nokia had stated that Maemo would be developed alongside Symbian.

At the Mobile World Congress in February 2010, it was announced that the Maemo project (from version 6) would merge with Intel’s Moblin to create MeeGo.[74] Only one phone, the Nokia N9, was released before the project was abandoned in favour of Windows Phone. Development now continues under the name Sailfish OS.[75][76]

If the Nokia Normandy project running Android is released, it will be the return of Linux-based smartphones at the company.[77]

Series 40 and the Asha Platform

Nokia Asha 501

Series 40 is a phone platform used in feature phones, mainly running Java-based applications. However, in the Asha range of smartphones, it has been marketed as a smartphone OS, despite not actually supporting smartphone features like multitasking or a fully fledged HTML browser.[78]
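
Applications for Series 40 phones were written against the Java ME (MIDP/CLDC) APIs rather than desktop Java. As a minimal sketch, the MIDlet below shows the shape of such an application; the class name and display text are invented for illustration, and compiling it requires a Java ME toolkit rather than the standard JDK.

import javax.microedition.lcdui.Display;
import javax.microedition.lcdui.Form;
import javax.microedition.midlet.MIDlet;

// A minimal Java ME MIDlet of the kind Series 40 phones ran.
public class HelloMidlet extends MIDlet {
    protected void startApp() {
        Form form = new Form("Hello");           // a simple titled screen
        form.append("Hello from Series 40");     // one line of text
        Display.getDisplay(this).setCurrent(form);
    }

    protected void pauseApp() { }                // lifecycle hook, unused here

    protected void destroyApp(boolean unconditional) { }
}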

After Nokia acquired Smarterphone, a company making the Smarterphone OS for low-end phones, that platform was combined with Series 40 to form the Asha Platform, which also inherits some UI characteristics from Nokia’s MeeGo platform. The Asha 501 was the first phone running the new OS.[79]

Reorganizations

Nokia opened its Komárom, Hungary mobile phone factory on 5 May 2000.[80]

In March 2007, Nokia signed a memorandum with the Cluj County Council, Romania, to open a new plant near the city in Jucu commune.[81][82][83] Moving production from the factory in Bochum, Germany to a low-wage country created an uproar in Germany.[84][85] Nokia subsequently moved its North American headquarters to Sunnyvale.

In April 2003, troubles in the networks equipment division caused the corporation to resort to similar streamlining practices on that side, including layoffs and organizational restructuring.[86] This diminished Nokia’s public image in Finland[87][88] and produced a number of court cases and an episode of a documentary television show critical of Nokia.[89]

In February 2006, Nokia and Sanyo announced a memorandum of understanding to create a joint venture addressing the CDMA handset business, but in June they ended negotiations without an agreement. Nokia also stated its decision to pull out of CDMA research and development, while continuing its CDMA business in selected markets.[90][91][92]

In June 2006, Jorma Ollila left his position as CEO to become the chairman of Royal Dutch Shell[93] and to give way for Olli-Pekka Kallasvuo.[94][95]

In May 2008, Nokia announced at its annual stockholder meeting that it wanted to shift to the Internet business as a whole, no longer wanting to be seen as just a telephone company. Google, Apple and Microsoft were not seen as natural competition for this new image, but they were considered major players to deal with.[96]

In November 2008, Nokia announced that it was ceasing mobile phone distribution in Japan.[97] In early December, distribution of the Nokia E71 was cancelled by both NTT DoCoMo and SoftBank Mobile. Nokia Japan retained its global research and development programs, sourcing business, and an MVNO venture for Vertu luxury phones using DoCoMo’s telecommunications network.

In April 2009, Check Point announced that it had completed the acquisition of Nokia’s network security business unit.[98]

In February 2012, Nokia announced that it was laying off 4,000 employees to move manufacturing from Europe and Mexico to Asia.[99]

In March 2012, Nokia announced that it was laying off 1,000 employees from its Salo, Finland factory to focus on software.[100] In June 2012, research facilities in Ulm, Germany and Burnaby, Canada were closed, resulting in the loss of further jobs. The company also announced its plan to lay off 10,000 jobs globally by the end of 2013.[101]

In January 2013, Nokia announced the layoff of about 1,000 employees from its IT, production and logistics divisions. The company planned to transfer the jobs of about 715 employees to subcontractors.[102]

Acquisitions

For a more comprehensive list, see List of acquisitions by Nokia.

The Nokia E55 from the business segment of the Eseries range

On 22 September 2003, Nokia acquired Sega.com, a branch of Sega, which became the major basis for developing the Nokia N-Gage device.[103]

On 16 November 2005, Nokia and Intellisync Corporation, a provider of data and PIM synchronization software, signed a definitive agreement for Nokia to acquire Intellisync.[104] Nokia completed the acquisition on 10 February 2006.[105]

On 19 June 2006, Nokia and Siemens AG announced that the companies would merge their mobile and fixed-line phone network equipment businesses to create one of the world’s largest network firms, Nokia Siemens Networks.[106] Each company held a 50% stake in the infrastructure company, which is headquartered in Espoo, Finland. The companies predicted annual sales of €16 bn and cost savings of €1.5 bn a year by 2010. About 20,000 Nokia employees were transferred to the new company.

On 8 August 2006, Nokia and Loudeye Corp. announced that they had signed an agreement for Nokia to acquire the online music distributor Loudeye Corporation for approximately US$60 million.[107] Nokia developed this into an online music service, launched on 29 August 2007 and aimed at rivaling iTunes, in the hope of using it to generate handset sales. Nokia completed the acquisition on 16 October 2006.[108]

In July 2007, Nokia acquired all assets of Twango, a media-sharing service for organizing and sharing photos, videos and other personal media.[109][110]

In September 2007, Nokia announced its intention to acquire Enpocket, a supplier of mobile advertising technology and services.[111]

In October 2007, pending shareholder and regulatory approval, Nokia agreed to buy Navteq, a U.S.-based supplier of digital mapping data, for a price of $8.1 billion.[5][112] Nokia finalized the acquisition on 10 July 2008.[113]

In September 2008, Nokia acquired OZ Communications, a privately held company with approximately 220 employees headquartered in Montreal, Canada.[114]

On 24 July 2009, Nokia announced that it would acquire certain assets of Cellity, a privately owned mobile software company that employed 14 people in Hamburg, Germany.[115] The acquisition of Cellity was completed on 5 August 2009.[116]

On 11 September 2009, Nokia announced the acquisition of “certain assets of Plum Ventures, Inc, a privately held company which employed approximately 10 people with main offices in Boston, Massachusetts. Plum will complement Nokia’s Social Location services”.[117]

On 28 March 2010, Nokia announced the acquisition of Novarra, a privately held provider of a mobile browser and service platform based in Chicago, Illinois, with more than 100 employees. Terms of the deal were not disclosed.[118]

On 10 April 2010, Nokia announced its acquisition of MetaCarta, whose technology was planned for use in local search, particularly involving location and other services. Financial details of the acquisition were not disclosed.[119]

In 2012, Nokia acquired Smarterphone, a developer of an operating system for feature phones, and the imaging company Scalado.[120][121]

Loss of Market Share in Smartphones

Apple Inc.’s entry into the smartphone market would later put heavy pressure on Nokia. Although it launched in 2007, the iPhone was initially outsold by Nokia smartphones, most notably the Nokia N95.[122] Symbian had a dominant 62.5% market share as of Q4 2007, ahead of its closest competitors, Microsoft’s Windows Mobile (11.9%) and RIM (10.9%). However, with the launch of the iPhone 3G in 2008, Apple’s market share doubled year-over-year by the end of that year, and the iPhone OS (now known as iOS) pulled ahead of Windows Mobile in operating system market share. Although in Q4 2008 Nokia was still by far the largest smartphone maker with a 40.8% share, this was a decline of over 10 percentage points since Q4 2007, mirrored by Apple’s increasing share.[123] The N95’s successor, the Nokia N96, released in late 2008, was received much less favorably, although the Nokia 5800 XpressMusic was widely considered the iPhone 3G’s main rival. Despite the critical and commercial success of the Nokia E71,[124] it was not enough to stop Nokia’s smartphone market slide. On 24 June 2008, Nokia bought the Symbian operating system and the next year made it open source.[125]

In early 2009, Nokia released the N97, a touchscreen device with a landscape QWERTY slider that focused on social networking. It was a commercial success despite mixed reviews, and its closest competitor was the iPhone 3GS. 2009 was a successful year for Nokia’s business smartphones; several key devices were launched, such as the Nokia E52, which gained a positive reception.[126][127] However, Symbian’s market share dropped from 52.4% in Q4 2008 to 46.1% a year later. RIM increased its share during the same period from 16.6% to 19.9%, but the big winner was once again Apple, which increased its share from 8.2% to 14.4%. Android grew too, but at 3.9% it was still a minor player.[128]

2010 was a bad year for Nokia and Symbian, and a very successful one for Google. Pressure on Nokia increased dramatically as the Linux-based operating system Android continued to make extraordinary gains.[129] Other Symbian makers, including Samsung and Sony Ericsson, chose to make Android-powered smartphones instead of Symbian ones,[130] and by mid-2010 Nokia was the only OEM of the operating system outside of Japan. Nokia developed the next-generation Symbian platform, Symbian^3, to replace S60. In April 2010, Nokia officially announced the Nokia N8 smartphone, packing a 12-megapixel camera with xenon flash and an aluminium body, and the first device running Symbian^3. It was released in October. Despite several improvements in Symbian^3, it was still not favoured by the public. The Guardian, for example, headlined its N8 review “Nokia N8 review: like hardware? You’ll love this. Like software? Ah…”, and said that despite Symbian’s touchscreen improvements over S60 5th Edition, it was still not a good experience.[131] ZDNet stated that the Symbian operating system was not as intuitive as Android or iOS. By Q4 2010, Symbian’s market share had dipped to 32%, whereas Android had risen sharply to 30%.[132] Despite losing market share, the smartphone unit was profitable, and smartphone sales increased in absolute numbers every quarter during 2010.[133] It has been estimated that 4 million units of the Nokia N8 were sold in Q4 2010.[134]

Alliance with Microsoft and Windows Phone

On 11 February 2011, Nokia’s CEO Stephen Elop, a former head of Microsoft’s business division, unveiled a new strategic alliance with Microsoft and announced that Nokia would replace Symbian and the MeeGo project with Microsoft’s Windows Phone operating system,[135][136] except on non-smartphones. Nokia was also to invest in the Series 40 platform and release a single MeeGo product in 2011, which shipped as the Nokia N9.[137]

As part of the restructuring plan, Nokia planned to reduce spending on research and development, instead customising and enhancing its software line for Windows Phone 7.[138] Nokia’s “applications and content store” (Ovi) would be integrated into the Windows Phone Store, and Nokia Maps would be at the heart of Microsoft’s Bing and adCenter. Microsoft would provide developer tools to Nokia to replace the Qt framework, which is not supported by Windows Phone 7 devices.[139]

Elop described Symbian as a “franchise platform”, with Nokia planning to sell 150 million Symbian devices after the alliance was set up. The emphasis for MeeGo shifted to longer-term exploration, with plans to ship “a MeeGo-related product” later in 2012. Microsoft’s search engine, Bing, was to become the search engine for all Nokia phones. Nokia also intended to get some level of customisation on Windows Phone 7.[140]

After this announcement, Nokia’s share price fell about 14%, its biggest drop since July 2009.[141] Following the replacement of the Symbian system, Nokia’s smartphone sales figures, which had previously increased, collapsed dramatically.[13] From the beginning of 2011 until 2013, Nokia fell from its position as the world’s largest smartphone vendor to assume the status of tenth largest.[14]

As Nokia was the largest mobile phone and smartphone manufacturer worldwide at the time,[142] it was suggested the alliance would make Microsoft’s Windows Phone 7 a stronger contender against Android and iOS.[139] Because previously increasing sales of Symbian smartphones began to fall rapidly at the beginning of 2011, Nokia was overtaken by Apple as the world’s biggest smartphone maker by volume in June 2011.[143][144] In August 2011, Chris Weber, head of Nokia’s subsidiary in the U.S., stated: “The reality is if we are not successful with Windows Phone, it doesn’t matter what we do (elsewhere).” He added: “North America is a priority for Nokia (…) because it is a key market for Microsoft.”[145]

Nokia reported sales of “well above 1 million” Lumia devices up to 26 January 2012,[146][147] 2 million for the first quarter of 2012,[148] and 4 million for the second quarter of 2012.[149] In that quarter, Nokia sold only 600,000 smartphones (Symbian and Windows Phone 7) in North America.[150] For comparison, Nokia had sold more than 30 million Symbian devices worldwide as late as Q4 2010,[151] and the Nokia N8 alone sold almost 4 million units in its first quarter on sale. In Q2 2012, 26 million iPhones and 105 million Android phones were shipped, but only 6.8 million devices with Symbian and 5.4 million with Windows Phone.[152]

While announcing an alliance with Groupon, Elop declared “The competition… is not with other device manufacturers, it’s with Google.”[153]

European carriers have stated that Nokia Windows phones are not good enough to compete with Apple iPhone or Samsung Galaxy phones, that “they are overpriced for what is not an innovative product” and that “No one comes into the store and asks for a Windows phone”.[154]

In June 2012, Nokia chairman Risto Siilasmaa told journalists that Nokia had a back-up plan in the eventuality that Windows Phone failed to be sufficiently successful in the market.[155][156]

Financial difficulties

Market share of Symbian, Windows Mobile and Windows Phone 7 among US smartphone owners from Q1 2011 to Q2 2012 according to Nielsen Company.

Amid falling sales, Nokia posted a loss of 368 million euros for Q2 2011, compared with a profit of 227 million euros in Q2 2010. In September 2011, Nokia announced it would cut another 3,500 jobs worldwide, including through the closure of its Cluj factory in Romania.[157]

On 8 February 2012, Nokia said it would cut around 4,000 jobs at smartphone manufacturing plants in Europe by the end of 2012 in order to move assembly closer to component suppliers in Asia. It planned to cut 2,300 of the 4,400 jobs in Hungary, 700 of the 1,000 jobs in Mexico, and 1,000 of the 1,700 factory jobs in Finland.[158]

On 14 June 2012, Nokia announced plans to cut 10,000 jobs globally by the end of 2013[159] and to shut production and research sites in Finland, Germany and Canada, in line with continuing losses and a stock price that had fallen to its lowest point since 1996. By that point, Nokia’s market value had fallen below $10 billion.[160]

In total, according to completed and planned layoffs, Nokia will have laid off 24,500 employees by the end of 2013. In the first stage, Nokia laid off 7,000 employees: 4,000 staff were cut and 3,000 were transferred to the services firm Accenture. Nokia also closed its factory in Cluj, Romania, reducing the workforce by 2,000 employees, and restructured the Location & Commerce business unit, cutting a further 1,200. In February 2012, Nokia unveiled a plan to cut 4,000 more jobs at its plants in Finland, Hungary and Mexico as it moved smartphone assembly work to Asia; the most recent plan was to cut a further 10,000 jobs globally by the end of 2013.[161] Based on the full-year report for 2010, Nokia had 66,267 personnel in its Devices & Services, NAVTEQ and Corporate Common Functions units combined (calculated by subtracting the personnel of Nokia Siemens Networks from the total personnel of Nokia Group).[162] Personnel in these units would therefore decrease by approximately 36 percent between the end of 2010 and the end of 2013, a comparison that best depicts the layoffs resulting from the February 2011 strategy change and from recent competition in the core mobile phone business.

On 18 June 2012, Moody’s downgraded Nokia’s rating to junk.[163] Nokia’s CEO admitted on 28 June 2012 that the company’s inability to foresee rapid changes in the mobile phone industry was one of the major reasons for the problems it was facing.[164]

On 4 May 2012, a group of Nokia investors filed a class action against the company as a result of disappointing sales of Nokia phones running on the Windows Phone platform.[165] On 22 August 2012, it was reported that a group of Finnish Nokia investors were considering gathering signatures for the removal of Elop as CEO.[166]

On 29 October 2012, Nokia said its high-end Lumia 820 and 920 phones, which run on Microsoft’s Windows Phone 8 software, would reach their first operators and retail outlets in some European markets, including France and Britain, and later in Russia and Germany, as well as other select markets.[167]

On 5 December 2012, Nokia introduced two new smartphones, the Lumia 620 and Lumia 920T. The 620 was released in January 2013.

In January 2013, Nokia reported 6.6 million smartphone sales for Q4 2012, consisting of 2.2 million Symbian devices and 4.4 million Lumia devices (Windows Phone 7 and 8).[168] In North America, only 700,000 mobile phones were sold, including smartphones.

In May 2013 Nokia released the Asha platform for its low-end borderline smartphone devices. The Verge commented that this may be a recognition on the part of Nokia that they are unable to move Windows Phone into the bottom end of smartphone devices fast enough and may be “hedging their commitment” to the Windows Phone platform.[169]

In December 2012, Nokia announced that it would be selling its headquarters Nokia House for €170 million.[170] In the same month, Nokia announced its partnership with the world’s largest cellular operator China Mobile to offer Nokia’s new Windows-based phone, the Lumia 920, as Lumia 920T, an exclusive Chinese variant. The partnership was a bid by Nokia to connect with China Mobile’s 700 million-person customer base.[171]

Following the second quarter of 2013, Nokia made an operating loss of €115m (£98.8m), with revenues falling 24% to €5.7bn, despite Lumia sales figures exceeding those of BlackBerry’s handsets during the same period. Over the nine quarters prior to the second quarter of 2013, Nokia sustained €4.1 billion of operating losses. The company experienced particular problems in both China and the U.S.; in the former, Nokia’s handset revenues were the lowest since 2002, while in the U.S., Francisco Jeronimo, an analyst for the research company IDC, stated: “Nokia continues to show no signs of recovery in the US market. High investments, high expectations, low results.”[172]

In July 2013, Nokia announced that Lumia sales were 7.4 million for the second quarter of the year – a record high.[173]

Acquisition of mobile phone business by Microsoft

On 2 September 2013, Microsoft, the producer of the Windows Phone operating system that has powered all of Nokia’s recent smartphone products, announced that it would acquire Nokia’s mobile device business in a deal worth €3.79bn, along with another €1.65bn to license Nokia’s portfolio of patents for 10 years, a deal totaling over €5.4bn. Steve Ballmer considered the purchase a “bold step into the future” for both companies, primarily as a result of their recent collaboration. Following the sale, Nokia will focus on three core business units: its Here mapping service (which Microsoft will license for four years under the deal), its infrastructure division Nokia Solutions and Networks (NSN), and developing and licensing its “advanced technologies”. Pending regulatory approval, the acquisition is expected to close in early 2014. As part of the deal, a number of Nokia executives will join Microsoft, and Stephen Elop will step down as CEO of Nokia to become head of Microsoft’s devices team; Risto Siilasmaa will replace Elop as interim CEO.[15][16][174][175]

While Microsoft will license the Nokia brand under a 10-year agreement, Nokia will be unable to use its name on smartphones and will be subject to a non-compete clause preventing it from producing any mobile devices under the Nokia name through 31 December 2015. Microsoft will acquire the rights to the Asha and Lumia brands as part of the deal.[176]

In an interview with Helsingin Sanomat, former Nokia executive Anssi Vanjoki commented that the Microsoft deal was “inevitable” due to the “failed strategy” of Stephen Elop.[177]

In October 2013, Nokia predicted a more profitable future for its NSN networks equipment business, which will become the company’s main business once its former flagship phones division is sold to Microsoft for $7.4 billion in 2014.[178]

Android – The Normandy Project

Main article: Nokia Normandy

The Nokia Normandy, showing Nokia’s Android UI

A media report in mid-September 2013 revealed that Nokia had tested the Android operating system on both its Lumia and Asha hardware. At the time, the future of these projects was unknown.[179] However, a new report on 11 December 2013 showed the Asha-like device, codenamed ‘Normandy’, for the first time, stating that development of the device was continuing despite the finalisation of the Microsoft acquisition.[180] AllThingsD suggested that Microsoft may not actually axe the device’s development.[181]

Operations

In 2011, Nokia had 130,000 employees in 120 countries, sales in more than 150 countries, global annual revenue of over €38 billion, and an operating loss of €1 billion.[182] It was the world’s largest manufacturer of mobile phones in 2011, with a global device market share of 23% in the second quarter.[142]

The Nokia Research Center, founded in 1986, is Nokia’s industrial research unit, consisting of about 500 researchers, engineers and scientists;[183][184] it has sites in seven countries: Finland, China, India, Kenya, Switzerland, the United Kingdom and the United States.[185] Besides its research centers, in 2001 Nokia founded (and owns) INdT, the Nokia Institute of Technology, an R&D institute located in Brazil.[186] Nokia operates a total of seven manufacturing facilities[8] located at Manaus, Brazil; Beijing and Dongguan, China; Komárom, Hungary; Chennai, India; Reynosa, Mexico; and Changwon, South Korea.[81][187] Nokia’s industrial design department is headquartered in Soho in London, UK, with significant satellite offices in Helsinki, Finland and Calabasas, California in the US.

Nokia is a public limited-liability company listed on the Helsinki, Frankfurt, and New York stock exchanges.[8] Nokia plays a very large role in the economy of Finland.[188][189] It is an important employer in Finland, and several small companies have grown into large ones as its partners and subcontractors.[190] In 2009, Nokia contributed 1.6% of Finland’s GDP, and it accounted for about 16% of Finland’s exports in 2006.[191]

Divisions

Since 1 July 2010, Nokia has comprised three business groups: Mobile Solutions, Mobile Phones and Markets.[192] The three units receive operational support from the Corporate Development Office, led by Kai Öistämö, which is also responsible for exploring corporate strategic and future growth opportunities.[192]

On 1 April 2007, Nokia’s Networks business group was combined with Siemens‘s carrier-related operations for fixed and mobile networks to form Nokia Siemens Networks, jointly owned by Nokia and Siemens and consolidated by Nokia.[193] Nokia bought the 50% share and took full control of the group on 3 July 2013.[194]

Mobile Solutions


The Nokia N900, a Maemo 5 Linux-based mobile Internet device and touchscreen smartphone from Nokia’s Nseries portfolio.

Mobile Solutions is responsible for Nokia’s portfolio of smartphones and mobile computers, including the more expensive multimedia and enterprise-class devices. The team is also responsible for a suite of internet services (formerly under the Ovi brand), with a strong focus on maps and navigation, music, messaging and media.[192] This unit is led by Anssi Vanjoki, along with Tero Ojanperä (for Services) and Alberto Torres (for MeeGo Computers).[192]

Mobile Phones

The Nokia Lumia 920 using inductive charging

Mobile Phones is responsible for Nokia’s portfolio of affordable mobile phones, as well as a range of services that people can access with them, and is headed by Mary T. McDowell.[192] This unit provides the general public with mobile voice and data products across a range of devices, including high-volume, consumer-oriented mobile phones. The devices are based on GSM/EDGE, 3G/W-CDMA, HSDPA and CDMA cellular technologies.

At the end of 2007, Nokia had managed to sell almost 440 million mobile phones, accounting for 40% of all global mobile phone sales.[195] In 2011, Nokia’s share of the mobile phone market had dropped to 27% (417 million phones).[196]

Anssi Vanjoki resigned a few days before Nokia World 2010; under the new leadership team, Jo Harlow took charge of the smartphone portfolio.

On 27 April 2011, The Register reported that Nokia was secretly developing a new operating system called Meltemi aimed at the low-end market, believed to be a replacement for the S30 and S40 operating systems. Because low-end customers increasingly demanded smartphone features in their feature phones, the OS would have included some features previously exclusive to high-end smartphones. On 26 July 2012, it was announced that Nokia had abandoned the Meltemi project as a cost-cutting measure.

Markets

The flagship Nokia store in São Paulo, Brazil

Markets is responsible for Nokia’s supply chains, sales channels, brand and marketing functions of the company, and is responsible for delivering mobile solutions and mobile phones to the market. The unit is headed by Niklas Savander.[192]

Subsidiaries

Nokia has numerous subsidiaries.[192] The largest in terms of revenues is Navteq, a Chicago, Illinois-based provider of digital map data and location-based content and services for automotive navigation systems, mobile navigation devices, Internet-based mapping applications, and government and business solutions. Navteq was acquired by Nokia on 1 October 2007.[5] Navteq’s map data is part of the Nokia Maps online service where users can download maps, use voice-guided navigation and other context-aware web services.[192]

Until 2008 Nokia was the major shareholder in Symbian Limited, a software development and licensing company that produced Symbian OS, a smartphone operating system used by Nokia and other manufacturers. In 2008 Nokia acquired Symbian Ltd and, along with a number of other companies, created the Symbian Foundation to distribute the Symbian platform royalty free and as open source.

Nokia Solutions and Networks

Nokia Solutions and Networks (NSN), previously known as Nokia Siemens Networks B.V., is a multinational data networking and telecommunications equipment company headquartered in Espoo, Finland. NSN was a joint venture between Nokia (50.1%) and Siemens (49.9%), but is now a wholly owned subsidiary of Nokia. It is the world’s fourth-largest telecoms equipment manufacturer measured by 2011 revenues (after Ericsson, Huawei and Alcatel-Lucent).[197] NSN has operations in around 150 countries.[198]

The creation of NSN was announced on 19 June 2006, when Nokia and Siemens announced that they would merge their mobile and fixed-line phone network equipment businesses.[106] The NSN brand identity was subsequently launched at the 3GSM World Congress in Barcelona in February 2007.[199][200] NSN provides wireless and fixed network infrastructure, communications and networks service platforms, and professional services to operators and service providers.[192] NSN focuses on GSM, EDGE, 3G/W-CDMA, LTE and WiMAX radio access networks; core networks with increasing IP and multiaccess capabilities; and services.

In July 2013, an announcement stated that Nokia bought back all shares in Nokia Siemens Networks for a sum of US$2.21 billion.[201]

Android App

Posted: January 16, 2014 in Android App
Tags:

Android is an operating system based on the Linux kernel,[12] designed primarily for touchscreen mobile devices such as smartphones and tablet computers. Initially developed by Android, Inc., which Google backed financially and later bought in 2005,[13] Android was unveiled in 2007 along with the founding of the Open Handset Alliance: a consortium of hardware, software, and telecommunication companies devoted to advancing open standards for mobile devices.[14] The first publicly available smartphone running Android, the HTC Dream, was released on October 22, 2008.[15]

The user interface of Android is based on direct manipulation, using touch inputs that loosely correspond to real-world actions, like swiping, tapping, pinching and reverse pinching, to manipulate on-screen objects. Internal hardware such as accelerometers, gyroscopes and proximity sensors is used by some applications to respond to additional user actions, for example adjusting the screen from portrait to landscape depending on how the device is oriented. Android allows users to customize their home screens with shortcuts to applications and widgets, which allow users to display live content, such as emails and weather information, directly on the home screen. Applications can further send notifications to the user to inform them of relevant information, such as new emails and text messages.
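
As an illustration of the sensor-driven behaviour described above, the sketch below shows how an Android application written in Java might subscribe to accelerometer updates. The SensorManager and SensorEventListener APIs are standard Android; the activity name and the reaction in onSensorChanged are hypothetical.

import android.app.Activity;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.os.Bundle;

public class TiltActivity extends Activity implements SensorEventListener {
    private SensorManager sensorManager;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
    }

    @Override
    protected void onResume() {
        super.onResume();
        // Subscribe to accelerometer updates while the activity is visible.
        Sensor accel = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        sensorManager.registerListener(this, accel, SensorManager.SENSOR_DELAY_NORMAL);
    }

    @Override
    protected void onPause() {
        super.onPause();
        sensorManager.unregisterListener(this); // stop listening to save battery
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // values[0..2] are acceleration along x, y, z in m/s^2; an app could
        // use them to, e.g., swap layouts when the device is rotated.
        float x = event.values[0];
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}

Registering in onResume and unregistering in onPause ties the sensor subscription to the activity’s visible lifetime, which is the usual pattern for avoiding unnecessary battery drain.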

Android is open source, and Google releases the source code under the Apache License.[12] This open-source code and permissive licensing allow the software to be freely modified and distributed by device manufacturers, wireless carriers and enthusiast developers. In practice, Android devices ship with a combination of open source and proprietary software.[3] Android has a large community of developers writing applications (“apps”), primarily in the Java programming language, that extend the functionality of devices.[16] In October 2012, there were approximately 700,000 apps available for Android, and the estimated number of applications downloaded from Google Play, Android’s primary app store, was 25 billion.[17][18] A developer survey conducted in April–May 2013 found that Android is the most popular platform for developers, used by 71% of the mobile developer population.[19]

Android is the world’s most widely used smartphone platform,[20] having overtaken Symbian in the fourth quarter of 2010.[21] Android is popular with technology companies that require a ready-made, low-cost, customizable and lightweight operating system for high-tech devices.[22] Despite being primarily designed for phones and tablets, it has also been used in televisions, games consoles, digital cameras and other electronics. Android’s open nature has encouraged a large community of developers and enthusiasts to use the open-source code as a foundation for community-driven projects, which add new features for advanced users[23] or bring Android to devices that were officially released running other operating systems.

As of November 2013, Android’s share of the global smartphone market, led by Samsung products, has reached 81%.[24][25][26] The operating system’s success has made it a target for patent litigation as part of the so-called “smartphone wars” between technology companies.[27][28] As of May 2013, 48 billion apps have been installed from the Google Play store,[29] and as of September 2013, 1 billion Android devices have been activated.[30]

Android
Android robot.svg

Android.svg

Android 4.4.2.png

Android 4.4.2 home screen
Company / developer Google
Open Handset Alliance
Android Open Source Project (AOSP)
Programmed in C (core), C++, Java (UI)[1]
OS family Unix-like
Working state Current
Source model Open source with proprietary components[2][3]
Initial release September 23, 2008[4]
Latest stable release 4.4.2 KitKat / December 9, 2013[5]
Marketing target Smartphones
Tablet computers
Available language(s) Multi-lingual (46 languages)
Package manager Google Play, APK
Supported platforms 32-bit ARM, MIPS,[6] x86[7]
Kernel type Monolithic (modified Linux kernel)
Userland Bionic libc,[8] shellfrom NetBSD,[9] native core utilities with a few from NetBSD[10]
Default user interface Graphical (Multi-touch)
License Apache License 2.0
Linux kernel patches under GNU GPL v2[11]

History

Android, Inc. was founded in Palo Alto, California in October 2003 by Andy Rubin (co-founder of Danger),[31] Rich Miner (co-founder of Wildfire Communications, Inc.),[32] Nick Sears[33] (once VP at T-Mobile), and Chris White (who headed design and interface development at WebTV)[13] to develop, in Rubin’s words, “smarter mobile devices that are more aware of its owner’s location and preferences”.[13] The company’s early intention was to develop an advanced operating system for digital cameras; when it was realised that the market for those devices was not large enough, it diverted its efforts to producing a smartphone operating system to rival those of Symbian and Windows Mobile.[34] Despite the past accomplishments of the founders and early employees, Android Inc. operated secretly, revealing only that it was working on software for mobile phones.[13] That same year, Rubin ran out of money. Steve Perlman, a close friend of Rubin, brought him $10,000 in cash in an envelope and refused a stake in the company.[35]

Google acquired Android Inc. on August 17, 2005; key employees of Android Inc., including Rubin, Miner and White, stayed at the company after the acquisition.[13] Not much was known about Android Inc. at the time, but many assumed that Google was planning to enter the mobile phone market with this move.[13] At Google, the team led by Rubin developed a mobile device platform powered by the Linux kernel. Google marketed the platform to handset makers and carriers on the promise of providing a flexible, upgradable system. Google had lined up a series of hardware component and software partners and signaled to carriers that it was open to various degrees of cooperation on their part.[36][37][38]

Speculation about Google’s intention to enter the mobile communications market continued to build through December 2006.[39] The unveiling of the iPhone, a touchscreen-based phone by Apple, on January 9, 2007 had a disruptive effect on the development of Android. At the time, a prototype device codenamed “Sooner” had a closer resemblance to a BlackBerry phone, with no touchscreen and a physical QWERTY keyboard. Work immediately began on re-engineering the OS and its prototypes to combine traits of the existing designs with an overall experience designed to compete with the iPhone.[40] In September 2007, InformationWeek covered an Evalueserve study reporting that Google had filed several patent applications in the area of mobile telephony.[41][42]

Eric Schmidt, Andy Rubin and Hugo Barra at a press conference for Google’s Nexus 7 tablet.

On November 5, 2007, the Open Handset Alliance, a consortium of technology companies including Google, device manufacturers such as HTC, Sony and Samsung, wireless carriers such as Sprint Nextel and T-Mobile, and chipset makers such as Qualcomm and Texas Instruments, unveiled itself, with a goal to develop open standards for mobile devices.[14] That day, Android was unveiled as its first product, a mobile device platform built on the Linux kernel version 2.6.[14] The first commercially available smartphone running Android was the HTC Dream, released on October 22, 2008.[15]

In 2010, Google launched its Nexus series of devices — a line of smartphones and tablets running the Android operating system and built by a manufacturing partner. HTC collaborated with Google to release the first Nexus smartphone,[43] the Nexus One. The series has since been updated with newer devices, such as the Nexus 4 phone and Nexus 10 tablet, made by LG and Samsung respectively. Google releases the Nexus phones and tablets to act as its flagship Android devices, demonstrating Android’s latest software and hardware features. On March 13, 2013, Larry Page announced in a blog post that Andy Rubin had moved from the Android division to take on new projects at Google.[44] He was replaced by Sundar Pichai, who also continues his role as the head of Google’s Chrome division,[45] which develops Chrome OS.

Since 2008, Android has seen numerous updates which have incrementally improved the operating system, adding new features and fixing bugs in previous releases. Each major release is named in alphabetical order after a dessert or sugary treat; for example, version 1.5 Cupcake was followed by 1.6 Donut. The latest version, 4.4.2 KitKat, was released on 9 December 2013.[5][46][47]

Features

Interface

Android green figure, next to its original packaging

Android’s user interface is based on direct manipulation,[48] using touch inputs that loosely correspond to real-world actions, like swiping, tapping, pinching and reverse pinching to manipulate on-screen objects.[48] The response to user input is designed to be immediate and provides a fluid touch interface, often using the vibration capabilities of the device to provide haptic feedback to the user. Internal hardware such as accelerometers, gyroscopes and proximity sensors[49] is used by some applications to respond to additional user actions, for example adjusting the screen from portrait to landscape depending on how the device is oriented, or allowing the user to steer a vehicle in a racing game by rotating the device, simulating control of a steering wheel.[50]
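As an illustration of how an app consumes these sensors, here is a minimal sketch in Java against the Android SDK's SensorManager API; the class name OrientationLogger is invented for the example, and a real app would do something more useful with the readings than log them.

    import android.app.Activity;
    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;
    import android.os.Bundle;
    import android.util.Log;

    // Hypothetical example: logs raw accelerometer readings while visible.
    public class OrientationLogger extends Activity implements SensorEventListener {
        private SensorManager sensorManager;
        private Sensor accelerometer;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
            accelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        }

        @Override
        protected void onResume() {
            super.onResume();
            // Listen only while in the foreground, to conserve battery.
            sensorManager.registerListener(this, accelerometer,
                    SensorManager.SENSOR_DELAY_NORMAL);
        }

        @Override
        protected void onPause() {
            super.onPause();
            sensorManager.unregisterListener(this);
        }

        @Override
        public void onSensorChanged(SensorEvent event) {
            // values[0..2] are acceleration along the x, y and z axes (m/s^2).
            Log.d("OrientationLogger", "x=" + event.values[0]
                    + " y=" + event.values[1] + " z=" + event.values[2]);
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) { }
    }

Registering in onResume() and unregistering in onPause() is the conventional pattern, since listening to sensors from the background wastes battery.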

Android devices boot to the homescreen, the primary navigation and information point on the device, which is similar to the desktop found on PCs. Android homescreens are typically made up of app icons and widgets; app icons launch the associated app, whereas widgets display live, auto-updating content such as the weather forecast, the user’s email inbox, or a news ticker directly on the homescreen.[51] A homescreen may be made up of several pages that the user can swipe back and forth between, though Android’s homescreen interface is heavily customisable, allowing the user to adjust the look and feel of the device to their tastes.[52] Third-party apps available on Google Play and other app stores can extensively re-theme the homescreen, and even mimic the look of other operating systems, such as Windows Phone.[53] Most manufacturers, and some wireless carriers, customise the look and feel of their Android devices to differentiate themselves from their competitors.[54]

A screenshot of the notification area of Android 4.4.2, showing a notification being dismissed by sliding it away.

Present along the top of the screen is a status bar, showing information about the device and its connectivity. This status bar can be “pulled” down to reveal a notification screen where apps display important information or updates, such as a newly received email or SMS text, in a way that does not immediately interrupt or inconvenience the user.[55] In early versions of Android these notifications could be tapped to open the relevant app, but recent updates have provided enhanced functionality, such as the ability to call a number back directly from the missed call notification without having to open the dialer app first.[56] Notifications are persistent until read or dismissed by the user.
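For developers, posting such a notification takes a few lines of Java. The sketch below uses the NotificationCompat helper from the Android support library; the helper class name, strings, and the choice of a stock icon are all purely illustrative.

    import android.app.NotificationManager;
    import android.content.Context;
    import android.support.v4.app.NotificationCompat;

    // Hypothetical helper: posts a simple status-bar notification.
    public class NotificationHelper {
        public static void notifyNewMail(Context context, String sender) {
            NotificationCompat.Builder builder = new NotificationCompat.Builder(context)
                    .setSmallIcon(android.R.drawable.stat_notify_chat) // stock icon, for the example
                    .setContentTitle("New email")
                    .setContentText("Message from " + sender)
                    .setAutoCancel(true); // removed when the user taps it

            NotificationManager manager = (NotificationManager)
                    context.getSystemService(Context.NOTIFICATION_SERVICE);
            manager.notify(1, builder.build()); // persistent until read or dismissed
        }
    }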

Applications

Android has a growing selection of third party applications, which can be acquired by users either through an app store such as Google Play or the Amazon Appstore, or by downloading and installing the application’s APK file from a third-party site.[57] The Play Store application allows users to browse, download and update apps published by Google and third-party developers, and is pre-installed on devices that comply with Google’s compatibility requirements.[58] The app filters the list of available applications to those that are compatible with the user’s device, and developers may restrict their applications to particular carriers or countries for business reasons.[59] Purchases of unwanted applications can be refunded within 15 minutes of the time of download,[60] and some carriers offer direct carrier billing for Google Play application purchases, where the cost of the application is added to the user’s monthly bill.[61]

As of July 2013, there are more than one million applications available for Android in the Play Store.[62] As of May 2013, 48 billion apps have been installed from the Google Play store.[29]

Applications are developed in the Java language using the Android software development kit (SDK). The SDK includes a comprehensive set of development tools,[63] including a debugger, software libraries, a handset emulator based on QEMU, documentation, sample code, and tutorials. The officially supported integrated development environment (IDE) is Eclipse with the Android Development Tools (ADT) plugin. Other development tools are available, including a Native Development Kit for applications or extensions in C or C++, Google App Inventor, a visual environment for novice programmers, and various cross-platform mobile web application frameworks.
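For a sense of scale, a complete (if hypothetical) “hello world” application written against the SDK is a single Java class; a real project would also declare the activity in its AndroidManifest.xml.

    import android.app.Activity;
    import android.os.Bundle;
    import android.widget.TextView;

    // Hypothetical minimal app: one activity that displays a line of text.
    public class HelloActivity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            TextView view = new TextView(this);
            view.setText("Hello, Android");
            setContentView(view); // the activity's entire UI is this one view
        }
    }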

In order to work around limitations on reaching Google services due to Internet censorship in the People’s Republic of China, Android devices sold in the PRC are generally customized to use state-approved services instead.[64]

Memory management

Since Android devices are usually battery-powered, Android is designed to manage memory (RAM) to keep power consumption at a minimum, in contrast to desktop operating systems, which generally assume they are connected to unlimited mains electricity. When an Android app is no longer in use, the system will automatically suspend it in memory; while the app is still technically “open”, suspended apps consume no resources (such as battery power or processing power) and sit idle in the background until needed again. This has the dual benefit of increasing the general responsiveness of Android devices, since apps do not need to be closed and reopened from scratch each time, and of ensuring that background apps do not consume power needlessly.[65]

Android manages the apps stored in memory automatically: when memory is low, the system will begin killing apps and processes that have been inactive for a while, starting with those least recently used. This process is designed to be invisible to the user, so that users do not need to manage memory or kill apps themselves.[66] However, confusion over Android memory management has resulted in third-party task killers becoming popular on the Google Play store; these are generally regarded as doing more harm than good.[67]
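Applications cooperate with this scheme through the Activity lifecycle callbacks. The hypothetical sketch below (class name and saved value invented) shows where an app would release resources and save state so that a low-memory kill stays invisible to the user, as described above.

    import android.app.Activity;
    import android.os.Bundle;

    // Hypothetical sketch: the callbacks Android invokes as an app is
    // suspended and, possibly, later killed to reclaim memory.
    public class LifecycleAwareActivity extends Activity {
        @Override
        protected void onPause() {
            super.onPause();
            // Leaving the foreground; release exclusive resources here.
        }

        @Override
        protected void onStop() {
            super.onStop();
            // No longer visible; the process now sits idle in the background.
        }

        @Override
        protected void onSaveInstanceState(Bundle outState) {
            super.onSaveInstanceState(outState);
            // Persist transient UI state, because a low-memory kill can
            // happen without any further warning to the app.
            outState.putLong("lastViewedItemId", 42L); // illustrative value
        }
    }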

Hardware

Tronsmart MK908, a Rockchip-based quad-core Android “mini PC”, with a microSD card next to it for a size comparison.

The main hardware platform for Android is the 32-bit ARMv7 architecture. There is support for x86 from the Android-x86 project,[7] and Google TV uses a special x86 version of Android. In 2013, Freescale announced Android support for its i.MX processors, specifically the i.MX5X and i.MX6X series.[68] In 2012, Intel processors began to appear in more mainstream Android devices, such as phones.[69]

As of November 2013, current versions of Android require at least 512 MB of RAM,[70] and a 32-bit ARMv7, MIPS or x86 architecture processor,[7] together with an OpenGL ES 2.0 compatible graphics processing unit (GPU).[71] Android supports OpenGL ES 1.1, 2.0 and 3.0. Some applications explicitly require a certain version of OpenGL ES, and suitable GPU hardware is required to run such applications.[71]
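An application with such a requirement can verify the device's OpenGL ES support at runtime through ActivityManager. A minimal sketch, assuming the app needs ES 2.0 (the class name is invented):

    import android.app.Activity;
    import android.app.ActivityManager;
    import android.content.pm.ConfigurationInfo;
    import android.os.Bundle;

    // Hypothetical check: refuse to start unless OpenGL ES 2.0 is available.
    public class GlCheckActivity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            ActivityManager am = (ActivityManager) getSystemService(ACTIVITY_SERVICE);
            ConfigurationInfo info = am.getDeviceConfigurationInfo();
            // reqGlEsVersion packs the version as 0xMMMMmmmm, e.g. 0x20000 for ES 2.0.
            boolean supportsEs2 = info.reqGlEsVersion >= 0x20000;
            if (!supportsEs2) {
                finish(); // this device cannot run our renderer
            }
        }
    }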

Android devices incorporate many optional hardware components, including still or video cameras, GPS, hardware orientation sensors, dedicated gaming controls, accelerometers, gyroscopes, barometers, magnetometers, proximity sensors, pressure sensors, thermometers and touchscreens. Some hardware components are not required but became standard in certain classes of devices, such as smartphones, and additional requirements apply if they are present. Some other hardware was initially required, but those requirements have since been relaxed or eliminated altogether. For example, as Android was initially developed as a phone OS, hardware such as microphones was required, while over time the phone function became optional.[59] Android once required an autofocus camera; this was relaxed to a fixed-focus camera,[59] and the camera requirement was dropped entirely when Android started to be used on set-top boxes.

Development

Android is developed in private by Google until the latest changes and updates are ready to be released, at which point the source code is made available publicly.[72] This source code will only run without modification on select devices, usually the Nexus series of devices. The source code is, in turn, adapted by OEMs to run on their hardware.[73] Android’s source code does not contain the often proprietary device drivers that are needed for certain hardware components.[74]

The green Android logo was designed for Google in 2007 by graphic designer Irina Blok. The design team was tasked with a project to create a universally identifiable icon with the specific inclusion of a robot in the final design. After numerous design developments based on science-fiction and space movies, the team eventually sought inspiration from the human symbol on restroom doors and modified the figure into a robot shape. As Android is open-sourced, it was agreed that the logo should be likewise, and since its launch the green logo has been reinterpreted into countless variations on the original design.[75]

Update schedule

From left to right: HTC Dream (G1), Nexus One, Nexus S, Galaxy Nexus

Google provides major updates, incremental in nature, to Android every six to nine months, which most devices are capable of receiving over the air.[76] The latest major update is Android 4.4 KitKat.[5]

Nexus 5, the most recent smartphone in the Nexus range

Compared to its chief rival mobile operating system, iOS, Android updates are typically slow to reach actual devices. For devices not under the Nexus brand, updates often arrive months after the given version is officially released.[77] This is partly because of the extensive variation in hardware among Android devices, to which each update must be specifically tailored, as the official Google source code only runs on Google’s flagship Nexus devices. Porting Android to specific hardware is a time- and resource-consuming process for device manufacturers, who prioritize their newest devices and often leave older ones behind.[77] Hence, older smartphones are frequently not updated if the manufacturer decides it is not worth the time, regardless of whether the phone is capable of running the update. This problem is compounded when manufacturers customize Android with their own interface and apps, which must be reapplied to each new release. Additional delays can be introduced by wireless carriers who, after receiving updates from manufacturers, further customize and brand Android to their needs and conduct extensive testing on their networks before sending the update out to users.[77]

The lack of after-sale support from manufacturers and carriers has been widely criticized by consumer groups and the technology media.[78][79] Some commentators have noted that the industry has a financial incentive not to update their devices, as the lack of updates for existing devices fuels the purchase of newer ones,[80] an attitude described as “insulting”.[79] The Guardian has complained that the complicated method of distribution for updates is only complicated because manufacturers and carriers have designed it that way.[79] In 2011, Google partnered with a number of industry players to announce an “Android Update Alliance”, pledging to deliver timely updates for every device for 18 months after its release;[81] however, this alliance has never been mentioned since.[77]

In 2012, Google began decoupling certain aspects of the operating system (particularly core applications) so they could be updated through the Google Play Store independently of Android itself. One of these components, Google Play Services, is a system-level process providing APIs for Google services, installed automatically on nearly all devices running Android version 2.2 and higher. With these changes, Google can add new operating system functionality through Play Services and application updates without having to distribute an update to the operating system itself. As a result, Android 4.2 and 4.3 contained relatively few user-facing changes, focusing more on minor changes and platform improvements.[3][82]
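From an app's perspective, relying on Play Services starts with checking that it is present and up to date. A sketch using the period-appropriate GooglePlayServicesUtil entry point (since superseded in later library versions); the class name is invented:

    import android.app.Activity;
    import android.app.Dialog;
    import com.google.android.gms.common.ConnectionResult;
    import com.google.android.gms.common.GooglePlayServicesUtil;

    // Hypothetical sketch: verify Google Play Services before using Google APIs.
    public class PlayServicesGate extends Activity {
        @Override
        protected void onResume() {
            super.onResume();
            int status = GooglePlayServicesUtil.isGooglePlayServicesAvailable(this);
            if (status != ConnectionResult.SUCCESS) {
                // Show the library's recovery dialog (update/install/enable).
                Dialog dialog = GooglePlayServicesUtil.getErrorDialog(status, this, 0);
                if (dialog != null) {
                    dialog.show();
                }
            }
        }
    }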

Linux kernel

As of November 2013, current Android versions consist of a kernel based on the Linux kernel version 3.4.10,[83][84] while Android versions older than 4.0 Ice Cream Sandwich were based on the Linux kernel 2.6.x.[85][86]

Android’s Linux kernel incorporates further architectural changes that Google makes outside the typical Linux kernel development cycle.[87] Certain features that Google contributed back to the Linux kernel, notably a power management feature called “wakelocks”, were rejected by mainline kernel developers, partly because they felt that Google did not show any intent to maintain its own code.[88][89][90] Google announced in April 2010 that it would hire two employees to work with the Linux kernel community,[91] but Greg Kroah-Hartman, the current Linux kernel maintainer for the stable branch, said in December 2010 that he was concerned that Google was no longer trying to get its code changes included in mainstream Linux.[89] Some Google Android developers hinted that “the Android team was getting fed up with the process”, because they were a small team and had more urgent work to do on Android.[92]

In August 2011, Linus Torvalds said that “eventually Android and Linux would come back to a common kernel, but it will probably not be for four to five years”.[93] In December 2011, Greg Kroah-Hartman announced the start of the Android Mainlining Project, which aims to put some Android drivers, patches and features back into the Linux kernel, starting in Linux 3.3.[94] Linux included the autosleep and wakelocks capabilities in the 3.5 kernel, after many previous attempts at merger. The interfaces are the same but the upstream Linux implementation allows for two different suspend modes: to memory (the traditional suspend that Android uses), and to disk (hibernate, as it is known on the desktop).[95] Google maintains a public code repository that contains their experimental work to re-base Android off the latest stable Linux versions.[96][97]

The flash storage on Android devices is split into several partitions, such as /system for the operating system itself and /data for user data and application installations.[98] In contrast to desktop Linux distributions, Android device owners are not given root access to the operating system, and sensitive partitions such as /system are read-only. However, root access can be obtained by exploiting security flaws in Android; this is done frequently by the open-source community to enhance the capabilities of their devices,[99] but also by malicious parties to install viruses and malware.[100]

Android is a Linux distribution according to the Linux Foundation[101] and Google’s open-source chief, Chris DiBona.[102] Others, such as Google engineer Patrick Brady, disagree that it is a Linux distribution, noting the lack of support for many GNU tools in Android, including glibc.[103]

Software stack

Android’s architecture diagram


On top of the Linux kernel, there are the middleware, libraries and APIs written in C, and application software running on an application framework which includes Java-compatible libraries based on Apache Harmony. Android uses the Dalvik virtual machine with just-in-time compilation to run Dalvik “dex-code” (Dalvik Executable), which is usually translated from Java bytecode.[104] Android 4.4 also supports a new experimental runtime, ART (Android Runtime), which is not enabled by default.[105]

In place of a standard C library, Android uses Bionic, a library that Google developed specifically for Android as a derivation of the BSD standard C library code. Bionic has several major features specific to the Linux kernel, and its development continues independently of Android’s other source code. The main benefits of using Bionic instead of the GNU C Library (glibc) or uClibc are its different licensing model, smaller runtime footprint, and optimization for low-frequency CPUs.[106][107]

Android does not have a native X Window System by default, nor does it support the full set of standard GNU libraries, which makes it difficult to port existing Linux applications or libraries to Android.[108] Support for simple C and SDL applications is possible through injection of a small Java shim and use of the JNI,[109] as in, for example, the Jagged Alliance 2 port for Android.[110]
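The Java side of such a shim can be very small. A hypothetical sketch (library and method names invented), where everything after loadLibrary happens in native code:

    // Hypothetical Java shim: loads a native (C/SDL) library and hands it control.
    public class NativeBridge {
        static {
            // Loads libgame.so from the APK's native-library directory.
            System.loadLibrary("game");
        }

        // Implemented in C and resolved through the JNI.
        public static native void nativeInit();
        public static native void nativeDrawFrame();
    }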

Open-source community

Android has an active community of developers and enthusiasts who use the Android Open Source Project (AOSP) source code to develop and distribute their own modified versions of the operating system.[111] These community-developed releases often bring new features and updates to devices faster than through the official manufacturer/carrier channels, albeit without as extensive testing or quality assurance;[23] provide continued support for older devices that no longer receive official updates; or bring Android to devices that were officially released running other operating systems, such as the HP TouchPad. Community releases often come pre-rooted and contain modifications unsuitable for non-technical users, such as the ability to overclock or over/undervolt the device’s processor.[112] CyanogenMod is the most widely used community firmware,[113] and acts as a foundation for numerous others.

Historically, device manufacturers and mobile carriers have typically been unsupportive of third-party firmware development. Manufacturers express concern about improper functioning of devices running unofficial software and the resulting support costs.[114] Moreover, modified firmwares such as CyanogenMod sometimes offer features, such as tethering, for which carriers would otherwise charge a premium. As a result, technical obstacles including locked bootloaders and restricted access to root permissions are common in many devices. However, as community-developed software has grown more popular, and following a statement by the Librarian of Congress in the United States that permits the “jailbreaking” of mobile devices,[115] manufacturers and carriers have softened their position regarding third-party development, with some, including HTC,[114] Motorola,[116] Samsung[117][118] and Sony,[119] providing support and encouraging development. Consequently, the need to circumvent hardware restrictions to install unofficial firmware has lessened over time as an increasing number of devices ship with unlocked or unlockable bootloaders, similar to the Nexus series of phones, although usually at the cost of voiding the device’s warranty.[114] However, despite manufacturer acceptance, some carriers in the US still require that phones be locked down.[120]

The unlocking and “hackability” of smartphones and tablets remains a source of tension between the community and industry, with the community arguing that unofficial development is increasingly important given the failure of industry to provide timely updates and/or continued support to their devices.[120]

Security and privacy

See also: Mobile security

Permissions are used to control a particular application’s access to system functions.

Android applications run in a sandbox, an isolated area of the system that does not have access to the rest of the system’s resources, unless access permissions are explicitly granted by the user when the application is installed. Before installing an application, the Play Store displays all required permissions: a game may need to enable vibration or save data to an SD card, for example, but should not need to read SMS messages or access the phonebook. After reviewing these permissions, the user can choose to accept or refuse them, installing the application only if they accept.[121] The sandboxing and permissions system lessens the impact of vulnerabilities and bugs in applications, but developer confusion and limited documentation have resulted in applications routinely requesting unnecessary permissions, reducing the system’s effectiveness.[122] Several security firms, such as Lookout Mobile Security,[123] AVG Technologies,[124] and McAfee,[125] have released antivirus software for Android devices. This software is ineffective, as sandboxing also applies to such applications, limiting their ability to scan the deeper system for threats.[126]
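Under this install-time model an app cannot gain permissions later, but code can still confirm at runtime that a permission was granted. A minimal sketch, with the helper name invented:

    import android.content.Context;
    import android.content.pm.PackageManager;

    // Hypothetical sketch: confirm a permission before using the guarded API.
    public class PermissionCheck {
        public static boolean canReadSms(Context context) {
            int result = context.checkCallingOrSelfPermission(
                    android.Manifest.permission.READ_SMS);
            // Under the install-time model this reflects what the user
            // accepted when the app was installed.
            return result == PackageManager.PERMISSION_GRANTED;
        }
    }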

Research from security company Trend Micro lists premium service abuse as the most common type of Android malware, where text messages are sent from infected phones to premium-rate telephone numbers without the consent or even knowledge of the user.[127] Other malware displays unwanted and intrusive adverts on the device, or sends personal information to unauthorised third parties.[127] Security threats on Android are reportedly growing exponentially; however, Google engineers have argued that the malware and virus threat on Android is being exaggerated by security companies for commercial reasons,[128][129] and have accused the security industry of playing on fears to sell virus protection software to users.[128] Google maintains that dangerous malware is actually extremely rare,[129] and a survey conducted by F-Secure showed that only 0.5% of Android malware reported had come from the Google Play store.[130]

Google currently uses their Google Bouncer malware scanner to watch over and scan the Google Play store apps.[131] It is intended to flag up suspicious apps and warn users of any potential issues with an application before they download it.[132] Android version 4.2 Jelly Bean was released in 2012 with enhanced security features, including a malware scanner built into the system, which works in combination with Google Play but can scan apps installed from third party sources as well, and an alert system which notifies the user when an app tries to send a premium-rate text message, blocking the message unless the user explicitly authorises it.[133]

Android smartphones have the ability to report the location of Wi-Fi access points encountered as phone users move around, to build databases containing the physical locations of hundreds of millions of such access points. These databases form electronic maps to locate smartphones, allowing them to run apps like Foursquare, Google Latitude and Facebook Places, and to deliver location-based ads.[134] Third-party monitoring software such as TaintDroid,[135] an academic research-funded project, can, in some cases, detect when personal information is being sent from applications to remote servers.[136] In August 2013, Google released the Android Device Manager (ADM), a component that allows users to remotely track, locate, and wipe their Android device through a web interface.[82][137] In December 2013, Google released ADM as an Android application on the Google Play store, where it is available to devices running Android version 2.2 and higher.[138][139]

The open-source nature of Android allows security contractors to take existing devices and adapt them for highly secure uses. For example, Samsung has worked with General Dynamics, through its Open Kernel Labs acquisition, to rebuild Jelly Bean on top of its hardened microvisor for the “Knox” project.[140][141]

As part of the broader 2013 mass surveillance disclosures it was revealed in September 2013 that the American and British intelligence agencies, the NSA and Government Communications Headquarters (GCHQ) respectively, have access to the user data on iPhone, BlackBerry, and Android devices. They are able to read almost all smartphone information, including SMS, location, emails, and notes.[142]

Licensing

The source code for Android is available under free and open-source software licenses. Google publishes most of the code (including network and telephony stacks)[143] under the Apache License version 2.0,[144][145] and the rest, Linux kernel changes, under the GNU General Public License version 2. The Open Handset Alliance develops the changes to the Linux kernel in public, with source code publicly available at all times. The rest of Android is developed in private by Google, with source code released publicly when a new version is released. Typically Google collaborates with a hardware manufacturer to produce a “flagship” device (part of the Nexus series) featuring the new version of Android, then makes the source code available after that device has been released.[146] The only Android release which was not immediately made available as source code was the tablet-only 3.0 Honeycomb release. The reason, according to Andy Rubin in an official Android blog post, was that Honeycomb was rushed for production of the Motorola Xoom,[147] and they did not want third parties creating a “really bad user experience” by attempting to put a version of Android intended for tablets onto smartphones.[148]

While much of Android itself is open-source software, most Android devices ship with a large amount of proprietary software. Google licenses a suite of proprietary apps for Android, such as the Play Store, Google Search, and Google Play Services (a software layer which provides APIs that integrate with Google-provided services), among others.[82] These apps, along with the Android trademarks, can only be licensed by hardware manufacturers for devices that meet Google’s compatibility standards; as such, forks of Android that make major changes to the OS itself, such as Amazon‘s Fire OS and Alibaba Group‘s Aliyun OS, do not include any of Google’s non-free components and are incompatible with apps that require them. Custom, certified distributions of Android produced by manufacturers (such as TouchWiz and HTC Sense) may also replace certain stock Android apps with their own proprietary variants and add additional software not included in the stock Android operating system.[3][149] With many devices, there are binary blobs that must be provided by the manufacturer in order for Android to work.[150] Also, device manufacturers cannot use Google’s Android trademark unless Google certifies that the device complies with its Compatibility Definition Document (CDD).[149]

Several stock apps in Android’s open source code used by previous versions (such as Search, Music, and Calendar) have also been effectively deprecated by Google, with development having shifted to newer but proprietary versions distributed and updated through the Play Store, such as Google Search and Google Play Music. While these older apps remain in Android’s source code, they no longer receive major updates. Additionally, proprietary variants of the stock Camera and Gallery apps include certain functions (such as Photosphere panoramas and Google+ album integration) that are excluded from the open-source versions, although the open-source versions have yet to be completely abandoned, and the home screen itself on the Nexus 5 is replaced by one implemented as a component of the proprietary Google Search app. Although an update for Google Search containing the relevant components was released through Google Play for all Android devices, the new home screen was not enabled by the Android 4.4 updates for any other Nexus devices, which still use the previous AOSP home screen.[3][151][152]

Richard Stallman and the Free Software Foundation have been critical of Android and have recommended the usage of alternatives such as Replicant, because drivers and firmware vital for the proper functioning of Android devices are usually proprietary, and because Google Play allows non-free software.[153][154]

Reception

Android-x86 running on an ASUS Eee PC netbook; Android has been unofficially ported to generic computers for use as a desktop operating system.

Android received a lukewarm reaction when it was unveiled in 2007. Although analysts were impressed with the respected technology companies that had partnered with Google to form the Open Handset Alliance, it was unclear whether mobile phone manufacturers would be willing to replace their existing operating systems with Android.[155] The idea of an open-source, Linux-based development platform sparked interest,[156] but there were additional worries about Android facing strong competition from established players in the smartphone market, such as Nokia and Microsoft, and rival Linux mobile operating systems that were in development.[157] These established players were skeptical: Nokia was quoted as saying “we don’t see this as a threat,”[158] and a member of Microsoft‘s Windows Mobile team stated “I don’t understand the impact that they are going to have.”[158]

Since then Android has grown to become the most widely used smartphone operating system[22] and “one of the fastest mobile experiences available.”[159] Reviewers have highlighted the open-source nature of the operating system as one of its defining strengths, allowing companies such as Amazon (Kindle Fire), Barnes & Noble (Nook), Ouya, Baidu and others to fork the software and release hardware running their own customised version of Android. As a result, it has been described by technology website Ars Technica as “practically the default operating system for launching new hardware” for companies without their own mobile platforms.[22] This openness and flexibility is also present at the level of the end user: Android allows extensive customisation of devices by their owners, and apps are freely available from non-Google app stores and third party websites. These have been cited as among the main advantages of Android phones over others.[22][160]

Despite Android’s popularity, including an activation rate three times that of iOS, there have been reports that Google has not been able to leverage its other products and web services successfully to turn Android into the money maker that analysts had expected.[161] The Verge suggested that Google is losing control of Android due to the extensive customization and proliferation of non-Google apps and services: Amazon’s Kindle Fire line uses Fire OS, a heavily modified fork of Android which does not include or support any of Google’s proprietary components, and requires that users obtain software from its competing Amazon Appstore instead of the Play Store.[3] Google SVP Andy Rubin, who was replaced as head of the Android division in March 2013, has been blamed for failing to establish a lucrative partnership with cell phone makers. The chief beneficiary of Android has been Samsung, whose Galaxy brand has surpassed that of Android in terms of brand recognition since 2011.[162][163] Meanwhile other Android manufacturers have struggled since 2011, such as LG, HTC, and Google’s own Motorola Mobility (whose partnership with Verizon Wireless to push the “DROID” brand has faded since 2010). Ironically, while Google directly earns nothing from the sale of each Android device, Microsoft and Apple have successfully sued to extract patent royalty payments from Android handset manufacturers.

Android has suffered from “fragmentation”,[164] a situation where the variety of Android devices, in terms of both hardware variations and differences in the software running on them, makes the task of developing applications that work consistently across the ecosystem harder than on rival platforms such as iOS, where hardware and software vary less. For example, according to data from OpenSignal in July 2013, there were 11,868 models of Android device, numerous different screen sizes and eight Android OS versions simultaneously in use, while the large majority of iOS users have upgraded to the latest iteration of that OS.[165] Critics such as Apple Insider have asserted that fragmentation via hardware and software pushed Android’s growth through large volumes of low-end, budget-priced devices running older versions of Android. They maintain that this forces Android developers to write for the “lowest common denominator” to reach as many users as possible, and leaves them too little incentive to make use of the latest hardware or software features, which are only available on a smaller percentage of devices.[166] However, OpenSignal, who develops both Android and iOS apps, concluded that although fragmentation can make development trickier, Android’s wider global reach also increases the potential reward.[165]

Tablets

Despite its success on smartphones, Android tablet adoption was initially slow.[167] One of the main causes was a chicken-and-egg situation: consumers were hesitant to buy an Android tablet due to a lack of high-quality tablet apps, but developers were hesitant to spend time and resources developing tablet apps until there was a significant market for them.[168][169] The content and app “ecosystem” proved more important than hardware specs as the selling point for tablets. Due to the lack of Android tablet-specific apps in 2011, early Android tablets had to make do with existing smartphone apps that were ill-suited to larger screen sizes, whereas the dominance of Apple’s iPad was reinforced by the large number of tablet-specific iOS apps.[169][170]

Despite app support in its infancy, a considerable number of Android tablets (alongside those using other operating systems, such as the HP TouchPad and BlackBerry PlayBook) were rushed to market in an attempt to capitalize on the success of the iPad.[169] InfoWorld has suggested that some Android manufacturers initially treated their first tablets as a “Frankenphone business”, a short-term, low-investment opportunity achieved by placing a smartphone-optimized Android OS (before Android 3.0 Honeycomb for tablets was available) on a device while neglecting the user interface. This approach, used for example on the Dell Streak, failed to gain market traction with consumers and damaged the early reputation of Android tablets.[171][172] Furthermore, several Android tablets, such as the Motorola Xoom, were priced the same as or higher than the iPad, which hurt sales. An exception was the Amazon Kindle Fire, which relied upon lower pricing as well as access to Amazon’s ecosystem of apps and content.[169][173]

This began to change in 2012 with the release of the affordable Nexus 7 and a push by Google for developers to write better tablet apps.[174] Android tablet market share surpassed the iPad’s in Q3 2012.[175]

Market share

Research company Canalys estimated in the second quarter of 2009 that Android had a 2.8% share of worldwide smartphone shipments.[176] By the fourth quarter of 2010 this had grown to 33% of the market, becoming the top-selling smartphone platform.[20] By the third quarter of 2011, Gartner estimated that more than half (52.5%) of the smartphone market belonged to Android.[177] By the third quarter of 2012 Android had a 75% share of the global smartphone market according to the research firm IDC.[178]

In July 2011, Google said that 550,000 new Android devices were being activated every day,[179] up from 400,000 per day in May,[180] and more than 100 million devices had been activated[181]with 4.4% growth per week.[179] In September 2012, 500 million devices had been activated with 1.3 million activations per day.[182][183] In May 2013, at Google I/O, Sundar Pichai announced that 900 million Android devices had been activated.[184]

Android market share varies by location. In July 2012, Android’s market share in the United States was 52%,[185] and rose to 90% in China.[186] During the third quarter of 2012, Android’s worldwide smartphone market share was 75%,[178] with 750 million devices activated in total and 1.5 million activations per day.[183]

As of March 2013, Android’s share of the global smartphone market, led by Samsung products, was 64%. The Kantar market research company reported that Google’s platform accounted for over 70% of all smartphone device sales in China during this period and that Samsung’s loyalty rate in Britain (59%) is second to that of Apple (79%).[26]

As of November 2013, Android’s share of the smartphone market is said to have reached 80%. Indeed, during August, September, and October 2013, 261.1 million smartphones were sold overall, about 211 million of them running Google’s operating system.[25]

Platform usage

Breakdown of the Android versions usage

These charts provide data about the relative number of devices accessing the Play Store recently and running a given version of the Android platform, as of 11 January 2014.[187]

Version      Code name            Release date       API level  Distribution
4.4          KitKat               October 31, 2013   19         1.4%
4.3.x        Jelly Bean           July 24, 2013      18         7.8%
4.2.x        Jelly Bean           November 13, 2012  17         15.4%
4.1.x        Jelly Bean           July 9, 2012       16         35.9%
4.0.3–4.0.4  Ice Cream Sandwich   December 16, 2011  15         16.9%
3.2          Honeycomb            July 15, 2011      13         0.1%
2.3.3–2.3.7  Gingerbread          February 9, 2011   10         21.2%
2.2          Froyo                May 20, 2010       8          1.3%

Application piracy

There has been some concern about the ease with which paid Android apps can be pirated.[188] In a May 2012 interview with Eurogamer, the developers of Football Manager stated that the ratio of pirated to legitimate players was 9:1 for their game Football Manager Handheld.[189] However, not every developer agreed that piracy rates were an issue; for example, in July 2012 the developers of the game Wind-up Knight said that piracy levels of their game were only 12%, and that most of the piracy came from China, where people cannot purchase apps from Google Play.[190]

In 2010, Google released a tool for validating authorized purchases for use within apps, but developers complained that this was insufficient and trivial to crack. Google responded that the tool, especially its initial release, was intended as a sample framework for developers to modify and build upon depending on their needs, not as a finished piracy solution.[191] In 2012 Google released a feature in Android 4.1 that encrypted paid applications so that they would only work on the device on which they were purchased, but this feature has been temporarily deactivated due to technical issues.[192]
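The validation tool's underlying mechanism is a license response signed with the developer's RSA key pair, which the app checks with its embedded public key. A generic, hypothetical sketch of that verification step using only standard java.security classes (class and method names invented; key management and response parsing omitted):

    import java.security.PublicKey;
    import java.security.Signature;

    // Hypothetical sketch: verify that a license response carries a valid
    // signature from the developer's key, the general approach behind
    // signed-purchase validation.
    public class LicenseVerifier {
        public static boolean isResponseAuthentic(PublicKey publicKey,
                byte[] signedData, byte[] signature) throws Exception {
            Signature verifier = Signature.getInstance("SHA1withRSA");
            verifier.initVerify(publicKey);
            verifier.update(signedData);
            return verifier.verify(signature); // false => forged or tampered
        }
    }

As the developer complaints suggest, a check like this is only a building block: if the surrounding logic is predictable, crackers can simply patch it out, which is why Google framed its release as a sample framework to be customized.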

Legal issues

Further information: Oracle v. Google, Smartphone wars, and Patent troll

Both Android and Android phone manufacturers have been involved in numerous patent lawsuits. On August 12, 2010, Oracle sued Google over claimed infringement of copyrights and patents related to the Java programming language.[193] Oracle originally sought damages up to $6.1 billion,[194] but this valuation was rejected by a United States federal judge who asked Oracle to revise the estimate.[195] In response, Google submitted multiple lines of defense, counterclaiming that Android did not infringe on Oracle’s patents or copyright, that Oracle’s patents were invalid, and several other defenses. They said that Android is based on Apache Harmony, a clean room implementation of the Java class libraries, and an independently developed virtual machine called Dalvik.[196] In May 2012, the jury in this case found that Google did not infringe on Oracle’s patents, and the trial judge ruled that the structure of the Java APIs used by Google was not copyrightable.[197][198]

In addition to lawsuits against Google directly, various proxy wars have been waged against Android indirectly by targeting manufacturers of Android devices, with the effect of discouraging manufacturers from adopting the platform by increasing the costs of bringing an Android device to market.[199] Both Apple and Microsoft have sued several manufacturers for patent infringement, with Apple’s ongoing legal action against Samsung being a particularly high-profile case. In October 2011, Microsoft said they had signed patent license agreements with ten Android device manufacturers, whose products account for 55% of the worldwide revenue for Android devices.[200] These include Samsung and HTC.[201] Samsung’s patent settlement with Microsoft includes an agreement that Samsung will allocate more resources to developing and marketing phones running Microsoft’s Windows Phone operating system.[199]

Google has publicly expressed its frustration with the current patent landscape in the United States, accusing Apple, Oracle and Microsoft of trying to take down Android through patent litigation, rather than innovating and competing with better products and services.[202] In 2011–12, Google purchased Motorola Mobility for US$12.5 billion, a move viewed in part as a defensive measure to protect Android, since Motorola Mobility held more than 17,000 patents.[203] In December 2011, Google bought over a thousand patents from IBM.[204]

In 2013, Fairsearch, a lobbying organization supported by Microsoft, Oracle and others, filed a complaint regarding Android with the European Commission, alleging that its free-of-charge distribution model constituted anti-competitive predatory pricing. The Free Software Foundation Europe, whose donors include Google, disputed the Fairsearch allegations.[205]

Usage on other devices

Ouya, a video game console which runs Android, was one of the most successful crowdfunding campaigns on the website Kickstarter.

The open and customizable nature of Android allows it to be used on other electronics aside from smartphones and tablets, including laptops and netbooks, smartbooks,[206] smart TVs (Google TV) and cameras (Nikon Coolpix S800c and Galaxy Camera).[207][208] In addition, the Android operating system has seen applications on smart glasses (Google Glass), smartwatches,[209] headphones,[210] car CD and DVD players,[211] mirrors,[212] portable media players,[213] landline[214] and Voice over IP phones.[215] Ouya, a video game console running Android, became one of the most successful Kickstarter campaigns, crowdfunding US$8.5m for its development,[216][217] and was later followed by other Android-based consoles, such as Nvidia‘s Project Shield — an Android device in a video game controller form factor.[218]

In 2011, Google demonstrated “Android@Home”, a home automation technology which uses Android to control a range of household devices including light switches, power sockets and thermostats.[219] Prototype light bulbs were announced that could be controlled from an Android phone or tablet, but Android head Andy Rubin was cautious to note that “turning a lightbulb on and off is nothing new”, pointing to numerous failed home automation services. Google, he said, was thinking more ambitiously and the intention was to use their position as a cloud services provider to bring Google products into customers’ homes.[220][221]

In August 2011, Parrot launched the first car stereo system powered by the Android platform, known as Asteroid and featuring voice commands.[222][223] In September 2013, Clarion released more advanced car stereo systems powered by the Android platform, known as AX1 and Mirage, running Android 2.3.7 and 2.2 (Gingerbread) respectively, and featuring GPS-based navigation, a 6.5-inch screen and various options for wireless data access.[224][225]

website

Posted: January 16, 2014 in web
Tags:

A website, also written as Web site,[1] web site, or simply site,[2] is a set of related web pages served from a single web domain. A website is hosted on at least one web server, accessible via a network such as the Internet or a private local area network through an Internet address known as a Uniform Resource Locator (URL). All publicly accessible websites collectively constitute the World Wide Web.

A webpage is a document, typically written in plain text interspersed with formatting instructions of Hypertext Markup Language (HTML, XHTML). A webpage may incorporate elements from other websites with suitable markup anchors.

Webpages are accessed and transported with the Hypertext Transfer Protocol (HTTP), which may optionally employ encryption (HTTP Secure, HTTPS) to provide security and privacy for the user of the webpage content. The user’s application, often a web browser, renders the page content according to its HTML markup instructions onto a display terminal.
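The same request/response exchange can be driven from any HTTP client, not only a browser. A minimal Java sketch using the standard library's HttpURLConnection (the class name is invented; passing an https:// address transparently upgrades the exchange to HTTP Secure):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Hypothetical sketch: fetch a page over HTTP the way a browser's
    // networking layer would, returning the raw HTML for rendering.
    public class PageFetcher {
        public static String fetch(String address) throws Exception {
            HttpURLConnection connection =
                    (HttpURLConnection) new URL(address).openConnection();
            connection.setRequestMethod("GET");
            BufferedReader reader = new BufferedReader(
                    new InputStreamReader(connection.getInputStream(), "UTF-8"));
            StringBuilder body = new StringBuilder();
            String line;
            while ((line = reader.readLine()) != null) {
                body.append(line).append('\n');
            }
            reader.close();
            return body.toString();
        }
    }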

The pages of a website can usually be accessed from a simple Uniform Resource Locator (URL) called the web address. The URLs of the pages organize them into a hierarchy, although the hyperlinking between them conveys the reader’s perceived site structure and guides the reader’s navigation of the site. A site generally includes a home page with most of the links to the site’s web content, and supplementary about, contact and link pages.

Some websites require a subscription to access some or all of their content. Examples of subscription websites include many business sites, parts of news websites, academic journal websites, gaming websites, file-sharing websites, message boards, web-based email, social networking websites, websites providing real-time stock market data, and websites providing various other services (e.g., websites offering storing and/or sharing of images, files and so forth).

History

The World Wide Web (WWW) was created in 1990 by CERN physicist Tim Berners-Lee.[3] On 30 April 1993, CERN announced that the World Wide Web would be free to use for anyone.[4]

Before the introduction of HTML and HTTP, other protocols such as the File Transfer Protocol and the gopher protocol were used to retrieve individual files from a server. These protocols offered a simple directory structure which the user navigated to choose files to download. Documents were most often presented as plain text files without formatting, or were encoded in word processor formats.

Overview

Websites have many functions and can be used in various fashions; a website can be a personal website, a commercial website, a government website or a nonprofit organization website. Websites can be the work of an individual, a business or other organization, and are typically dedicated to a particular topic or purpose. Any website can contain a hyperlink to any other website, so the distinction between individual sites, as perceived by the user, can be blurred.

Websites are written in, or dynamically converted to, HTML (Hypertext Markup Language) and are accessed using a software interface classified as a user agent. Web pages can be viewed or otherwise accessed from a range of computer-based and Internet-enabled devices of various sizes, including desktop computers, laptops, PDAs and cell phones.

A website is hosted on a computer system known as a web server, also called an HTTP server. These terms can also refer to the software that runs on these systems to retrieve and deliver web pages in response to requests from the website’s users. Apache is the most commonly used web server software (according to Netcraft statistics), and Microsoft‘s IIS is also commonly used. Some alternatives, such as Lighttpd, Hiawatha and Cherokee, are fully functional and lightweight.

Static website

Main article: Static web page

A static website is one that has web pages stored on the server in the format that is sent to a client web browser. It is primarily coded in Hypertext Markup Language (HTML); Cascading Style Sheets (CSS) are used to control appearance beyond basic HTML. Images are commonly used to effect the desired appearance and as part of the main content. Audio or video might also be considered “static” content if it plays automatically or is generally non-interactive.

This type of website usually displays the same information to all visitors. Similar to handing out a printed brochure to customers or clients, a static website will generally provide consistent, standard information for an extended period of time. Although the website owner may make updates periodically, editing the text, photos and other content is a manual process and may require basic website design skills and software. Simple forms of marketing websites, such as a classic five-page website or a brochure website, are often static websites, because they present pre-defined, static information to the user. This may include information about a company and its products and services through text, photos, animations, audio/video, and navigation menus.

Static web sites can be edited using four broad categories of software:

  • Text editors, such as Notepad or TextEdit, where content and HTML markup are manipulated directly within the editor program
  • WYSIWYG offline editors, such as Microsoft FrontPage and Adobe Dreamweaver (previously Macromedia Dreamweaver), with which the site is edited using a GUI and the final HTML markup is generated automatically by the editor software
  • WYSIWYG online editors, which create media-rich online presentations such as web pages, widgets, intros, blogs, and other documents
  • Template-based editors, such as RapidWeaver and iWeb, which allow users to quickly create and upload web pages to a web server without detailed HTML knowledge, as they pick a suitable template from a palette and add pictures and text to it in a desktop publishing fashion without direct manipulation of HTML code

Static websites may still use server side includes (SSI) as an editing convenience, such as sharing a common menu bar across many pages. As the site’s behaviour to the reader is still static, this is not considered a dynamic site.

Dynamic website

Main article: Dynamic web page

A dynamic website is one that changes or customizes itself frequently and automatically.

Server-side dynamic pages are generated “on the fly” by computer code that produces the HTML and CSS. There is a wide range of software systems, such as CGI, Java Servlets and JavaServer Pages (JSP), Active Server Pages and ColdFusion (CFML), available to generate dynamic web systems and dynamic sites. Various web application frameworks and web template systems are available for general-use programming languages like PHP, Perl, Python, and Ruby, to make it faster and easier to create complex dynamic web sites.
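As a concrete (hypothetical) example of the servlet approach named above, the following Java class generates a page on the fly for each request; the class name and parameter are invented, and a production version would also HTML-escape the user-supplied value.

    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical servlet: builds the HTML "on the fly" per request.
    public class GreetingServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws IOException {
            String name = request.getParameter("name"); // e.g. /greet?name=Ada
            response.setContentType("text/html");
            PrintWriter out = response.getWriter();
            out.println("<html><body>");
            out.println("<h1>Hello, " + (name == null ? "visitor" : name) + "</h1>");
            out.println("<p>Generated at " + new java.util.Date() + "</p>");
            out.println("</body></html>");
        }
    }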

A site can display the current state of a dialogue between users, monitor a changing situation, or provide information in some way personalized to the requirements of the individual user. For example, when the front page of a news site is requested, the code running on the web server might combine stored HTML fragments with news stories retrieved from a database or another web site via RSS to produce a page that includes the latest information. Dynamic sites can be interactive by using HTML forms, storing and reading back browser cookies, or by creating a series of pages that reflect the previous history of clicks. Another example of dynamic content is when a retail website with a database of media products allows a user to input a search request, e.g. for the keyword Beatles. In response, the content of the web page changes from what was previously displayed, presenting a list of Beatles products like CDs, DVDs and books.

Dynamic HTML uses JavaScript code to instruct the web browser how to interactively modify the page contents.

One way to simulate a certain type of dynamic web site while avoiding the performance loss of initiating the dynamic engine on a per-user or per-connection basis is to periodically and automatically regenerate a large series of static pages.
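A hypothetical sketch of that approach in Java: a scheduled task rebuilds an HTML file on disk, and the web server then serves it as an ordinary static page (the class name, output path, and interval are all illustrative).

    import java.io.FileWriter;
    import java.io.IOException;
    import java.util.Date;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Hypothetical sketch: periodically regenerate a static page so the
    // web server never runs dynamic code per request.
    public class StaticRegenerator {
        public static void main(String[] args) {
            ScheduledExecutorService scheduler =
                    Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(new Runnable() {
                public void run() {
                    try {
                        FileWriter out = new FileWriter("/var/www/html/index.html");
                        out.write("<html><body><p>Built at " + new Date()
                                + "</p></body></html>");
                        out.close();
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
            }, 0, 10, TimeUnit.MINUTES); // rebuild every ten minutes
        }
    }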

Multimedia and interactive content

Early web sites had only text, and soon after, images. Web browser plug-ins were then used to add audio, video, and interactivity (such as for a rich Internet application that mirrors the complexity of a desktop application like a word processor). Examples of such plug-ins are Microsoft Silverlight, Adobe Flash, Adobe Shockwave, and applets written in Java. HTML5 includes provisions for audio and video without plugins. JavaScript is also built into most modern web browsers, and allows web site creators to send code to the web browser that instructs it how to interactively modify page content and communicate with the web server if needed. (The browser’s internal representation of the content is known as the Document Object Model (DOM), and the technique is known as Dynamic HTML.)

Spelling

The form “website” has become the most common spelling, but “Web site” (capitalised) and “web site” are also widely used, though declining. Some academia, some large book publishers, and some dictionaries still use “Web site”, reflecting the origin of the term in the proper name World Wide Web. There has also been similar debate regarding related terms such as web pageweb server, and webcam.

Among leading style guides, the Reuters style guide,[5] The Chicago Manual of Style,[6] and the AP Stylebook (since April 2010)[7] all recommend “website”.

Among leading dictionaries and encyclopedias, the Canadian Oxford Dictionary prefers “website”, and the Oxford English Dictionary changed to “website” in 2004.[8] Wikipedia also uses “website”, but Encyclopædia Britannica uses both “Web site” and “Website”.[9] Britannica’s Merriam-Webster subsidiary uses “Web site”, recognising “website” as a variant.[10]

Among leading language-usage commentators, Garner’s Modern American Usage acknowledges that “website” is the standard form,[11] but Bill Walsh, of The Washington Post, argues for using “Web site” in his books and on his website[12] (however, The Washington Post itself uses “website”[13]).

Among major Internet technology companies and corporations, Google uses “website”,[14] as does Apple,[15] though Microsoft uses both “website” and “web site”.[16][17][18]

Types of websites

Websites can be divided into two broad categories – static and interactive. Interactive sites are part of the Web 2.0 community of sites, and allow for interactivity between the site owner and site visitors. Static sites serve or capture information but do not allow engagement with the audience directly.

Some websites are informational or produced by enthusiasts for personal use or entertainment. Many websites do aim to make money, using one or more business models, including:

  • Posting interesting content and selling contextual advertising either through direct sales or through an advertising network.
  • E-commerce – products or services are purchased directly through the web site
  • Advertising products or services available at a brick-and-mortar business
  • Freemium – basic content is available for free but premium content is paid

There are many varieties of websites, each specializing in a particular type of content or use, and they may be arbitrarily classified in any number of ways. A few such classifications might include:

  • Affiliate: A site, typically with few pages, whose purpose is to sell a third party’s product. The seller receives a commission for facilitating the sale.
  • Affiliate agency: A portal that renders not only its own custom CMS content but also syndicated content from other content providers for an agreed fee. There are usually three relationship tiers: affiliate agencies (e.g., Commission Junction), advertisers (e.g., eBay) and consumers (e.g., Yahoo!).
  • Archive site: Used to preserve valuable electronic content threatened with extinction. Two examples are the Internet Archive, which since 1996 has preserved billions of old (and new) web pages, and Google Groups, which in early 2005 was archiving over 845,000,000 messages posted to Usenet news/discussion groups.
  • Attack site: A site created specifically to attack visitors’ computers on their first visit by downloading a file (usually a trojan horse). These websites rely on unsuspecting users with poor anti-virus protection.
  • Blog (web log): Sites generally used to post online diaries, which may include discussion forums (e.g., Blogger, Xanga, WordPress). Many bloggers use blogs like the editorial section of a newspaper to express their ideas on anything from politics to religion to video games to parenting. Some bloggers are professionals paid to blog about a certain subject, usually on news sites.
  • Brand building site: A site with the purpose of creating an experience of a brand online. These sites usually do not sell anything, but focus on building the brand. They are most common for low-value, high-volume fast moving consumer goods (FMCG).
  • Celebrity website: A website whose information revolves around a celebrity. These sites can be official (endorsed by the celebrity) or fan-made (run by fans, without implicit endorsement). Example: jimcarrey.com.
  • Click-to-donate site: A website that allows the visitor to donate to charity simply by clicking on a button or answering a question correctly. An advertiser usually donates to the charity for each correct answer generated. Examples: The Hunger Site, Freerice, Ripple.
  • Community site: A site where persons with similar interests communicate with each other, usually by chat or message boards. Examples: Myspace, Facebook, Orkut.
  • Content site: Sites whose business is the creation and distribution of original content (e.g., Slate, About.com).
  • Classified ads site: Sites publishing classified advertisements. Example: gumtree.com.
  • Corporate website: Used to provide background information about a business, organization, or service.
  • Dating website: A site where users can find other single people looking for long-term relationships, dating, or just friends. Many are paid services, such as eHarmony and Match.com, but there are many free or partially free dating sites. Most dating sites today have the functionality of social networking websites.
  • Electronic commerce (e-commerce) site: A site offering goods and services for online sale and enabling online transactions for such sales.
  • Forum website: A site where people discuss various topics.
  • Gallery website: A website designed specifically for use as a gallery; these may be art galleries or photo galleries, of a commercial or non-commercial nature.
  • Government site: A website made by the local, state, departmental or national government of a country. These governments usually also operate websites intended to inform tourists or support tourism. For example, Richmond.com is the geodomain for Richmond, Virginia.
  • Gripe site: A site devoted to the criticism of a person, place, corporation, government, or institution.
  • Gaming or gambling website: A site that lets users play online games; some enable people to gamble online.
  • Humor site: Satirizes, parodies or otherwise exists solely to amuse.
  • Information site: Most websites fit this type to some extent; many of them are not necessarily for commercial purposes. Examples: RateMyProfessors.com, Free Internet Lexicon and Encyclopedia. Most government, educational and nonprofit institutions have an informational site.
  • Media sharing site: A site that enables users to upload and view media such as pictures, music, and videos. Examples: Flickr, YouTube, Google Videos.
  • Mirror site: A website that is a replication of another website. Mirrors are often used in response to spikes in visitor traffic, to provide multiple sources of the same information, and are of particular value as a way of providing reliable access to large downloads.
  • Microblog site: A short and simple form of blogging. Microblogs are limited to a certain number of characters and work like a status update on Facebook. Example: Twitter.
  • News site: Similar to an information site, but dedicated to dispensing news, politics, and commentary. Example: cnn.com.
  • Personal website: A website about an individual or a small group (such as a family) containing information or any content the individual wishes to include. Such a site differs from a celebrity website, which can be very expensive and run by a publicist or agency.
  • Phishing site: A website created to fraudulently acquire sensitive information, such as passwords and credit card details, by masquerading as a trustworthy person or business (such as the Social Security Administration or PayPal) in an electronic communication (see Phishing).
  • P2P/torrents website: A website that indexes torrent files; this type of website is different from a BitTorrent client, which is usually stand-alone software. Examples: Mininova, The Pirate Bay, IsoHunt.
  • Political site: A site on which people may voice political views, share political humor, campaign for elections, or present information about a certain political party or ideology.
  • Porn site: A site that shows sexually explicit content. Such a site can resemble a personal website (when it belongs to a porn actor or actress) or a media sharing website (where users upload anything from their own sexually explicit material to movies made by adult studios).
  • Question and answer (Q&A) site: A site where people can ask questions and get answers. Examples: Yahoo! Answers, Stack Exchange Network (including Stack Overflow).
  • Rating site: A site on which people can praise or disparage what is featured.
  • Religious site: A site on which people may advertise a place of worship, provide inspiration, or seek to encourage the faith of followers of that religion.
  • Review site: A site on which people can post reviews for products or services.
  • School site: A site on which teachers, students, or administrators can post information about current events at or involving their school. U.S. elementary through high school websites generally use “k12” in the URL.
  • Scraper site: A site that largely duplicates the content of another site without permission, without actually pretending to be that site, in order to capture some of that site’s traffic (especially from search engines) and profit from advertising revenue or in other ways.
  • Search engine site: A website that indexes material on the Internet or an intranet (and lately on traditional media such as books and newspapers) and provides links to information as a response to a query. Examples: Google Search, Bing, GoodSearch, DuckDuckGo.
  • Shock site: Includes images or other material intended to be offensive to most viewers. Examples: Goatse.cx, rotten.com.
  • Showcase site: Web portals used by individuals and organisations to showcase things of interest or value.
  • Social bookmarking site: A site where users share other content from the Internet and rate and comment on the content. Examples: StumbleUpon, Digg.
  • Social networking site: A site where users can communicate with one another and share media, such as pictures, videos, music and blogs, with other users. These may include games and web applications. Examples: Facebook, Orkut, Google+.
  • Warez site: A site designed to host or link to materials such as music, movies and software for the user to download.
  • Webmail site: A site that provides a webmail service. Examples: Hotmail, Gmail, Yahoo!.
  • Web portal: A site that provides a starting point or a gateway to other resources on the Internet or an intranet. Examples: msn.com, msnbc.com, Yahoo!.
  • Wiki site: A site whose content users collaboratively edit. Examples: Wikipedia, wikiHow, Wikia.

Some websites may be included in one or more of these categories. For example, a business website may promote the business’s products, but may also host informative documents, such as white papers. There are also numerous sub-categories of the ones listed above. For example, a porn site is a specific type of e-commerce site or business site (that is, it is trying to sell memberships for access to its site), and may also have social networking capabilities. A fansite may be a dedication from the owner to a particular celebrity.

Websites are constrained by architectural limits (e.g., the computing power dedicated to the website). Very large websites, such as those of Facebook, Yahoo!, Microsoft, and Google, employ many servers and load balancing equipment, such as Cisco Content Services Switches, to distribute visitor loads over multiple computers at multiple locations. As of early 2011, Facebook utilized 9 data centers with approximately 63,000 servers.

In February 2009, Netcraft, an Internet monitoring company that has tracked Web growth since 1995, reported that there were 215,675,903 websites with domain names and content on them in 2009, compared to just 19,732 websites in August 1995.[19]

Awards

The Webby Awards, Favourite Website Awards, Interactive Media Awards and WebAwards are prominent award organizations recognizing the world’s best websites.

Internet

Posted: January 16, 2014 in Internet
Tags:

The Internet is a global system of interconnected computer networks that use the standard Internet protocol suite (TCP/IP) to serve several billion users worldwide. It is a network of networks that consists of millions of private, public, academic, business, and government networks, of local to global scope, that are linked by a broad array of electronic, wireless and optical networking technologies. The Internet carries an extensive range of information resources and services, such as the inter-linked hypertext documents of the World Wide Web (WWW), the infrastructure to support email, and peer-to-peer networks.

Most traditional communications media including telephone, music, film, and television are being reshaped or redefined by the Internet, giving birth to new services such as voice over Internet Protocol (VoIP) and Internet Protocol television (IPTV). Newspaper, book and other print publishing are adapting to website technology, or are reshaped into blogging and web feeds. The Internet has enabled and accelerated new forms of human interactions through instant messaging, Internet forums, and social networking. Online shopping has boomed both for major retail outlets and small artisans and traders. Business-to-business and financial services on the Internet affect supply chains across entire industries.

The origins of the Internet reach back to research commissioned by the United States government in the 1960s to build robust, fault-tolerant communication via computer networks. While this work, together with work in the United Kingdom and France, led to important precursor networks, they were not the Internet. There is no consensus on the exact date when the modern Internet came into being, but sometime in the early to mid-1980s is considered reasonable.

The funding of a new U.S. backbone by the National Science Foundation in the 1980s, as well as private funding for other commercial backbones, led to worldwide participation in the development of new networking technologies, and the merger of many networks. Though the Internet has been widely used by academia since the 1980s, the commercialization of what was by the 1990s an international network resulted in its popularization and incorporation into virtually every aspect of modern human life. As of June 2012, more than 2.4 billion people—over a third of the world’s human population—have used the services of the Internet; approximately 100 times more people than were using it in 1995.[1][2]

The Internet has no centralized governance in either technological implementation or policies for access and usage; each constituent network sets its own policies. Only the overreaching definitions of the two principal name spaces in the Internet, the Internet Protocol address space and the Domain Name System, are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with.

Visualization of Internet routing paths

Research into packet switching started in the early 1960s, and packet switched networks such as the Mark I at NPL in the UK,[8] ARPANET, CYCLADES,[9][10] Merit Network,[11] Tymnet, and Telenet were developed in the late 1960s and early 1970s using a variety of protocols. The ARPANET in particular led to the development of protocols for internetworking, in which multiple separate networks could be joined together into a network of networks.[citation needed]

The first two nodes of what would become the ARPANET were interconnected between Leonard Kleinrock‘s Network Measurement Center at UCLA’s School of Engineering and Applied Science and Douglas Engelbart’s NLS system at SRI International (SRI) in Menlo Park, California, on 29 October 1969.[12] The third site on the ARPANET was the Culler-Fried Interactive Mathematics center at the University of California at Santa Barbara, and the fourth was the University of Utah Graphics Department. In an early sign of future growth, there were already fifteen sites connected to the young ARPANET by the end of 1971.[13][14] These early years were documented in the 1972 film Computer Networks: The Heralds of Resource Sharing.

Early international collaborations on ARPANET were sparse. For various political reasons, European developers were concerned with developing the X.25 networks.[15] Notable exceptions were the Norwegian Seismic Array (NORSAR) in June 1973,[16] followed in 1973 by Sweden with satellite links to the Tanum Earth Station and Peter T. Kirstein‘s research group in the UK, initially at the Institute of Computer Science, University of London and later at University College London.[citation needed]

In December 1974, RFC 675 – Specification of Internet Transmission Control Program, by Vinton Cerf, Yogen Dalal, and Carl Sunshine, used the term internet as a shorthand for internetworking, and later RFCs repeat this use.[17] Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) developed the Computer Science Network (CSNET). In 1982, the Internet Protocol Suite (TCP/IP) was standardized and the concept of a world-wide network of fully interconnected TCP/IP networks called the Internet was introduced.

T3 NSFNET Backbone, c. 1992

TCP/IP network access expanded again in 1986 when the National Science Foundation Network (NSFNET) provided access to supercomputer sites in the United States from research and education organizations, first at 56 kbit/s and later at 1.5 Mbit/s and 45 Mbit/s.[18] Commercial Internet service providers (ISPs) began to emerge in the late 1980s and early 1990s. The ARPANET was decommissioned in 1990. The Internet was fully commercialized in the U.S. by 1995 when NSFNET was decommissioned, removing the last restrictions on the use of the Internet to carry commercial traffic.[19] The Internet started a rapid expansion to Europe and Australia in the mid to late 1980s[20][21] and to Asia in the late 1980s and early 1990s.[22]

Since the mid-1990s the Internet has had a tremendous impact on culture and commerce, including the rise of near instant communication by email, instant messaging, Voice over Internet Protocol (VoIP) “phone calls”, two-way interactive video calls, and the World Wide Web[23] with its discussion forums, blogs, social networking, and online shopping sites. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1 Gbit/s, 10 Gbit/s, or more.

Worldwide Internet users
                                  2005         2010         2013 (est.)
World population[24]              6.5 billion  6.9 billion  7.1 billion
Not using the Internet            84%          70%          61%
Using the Internet                16%          30%          39%
Users in the developing world     8%           21%          31%
Users in the developed world      51%          67%          77%
Source: International Telecommunications Union.[25]

The Internet continues to grow, driven by ever greater amounts of online information and knowledge, commerce, entertainment and social networking.[26] During the late 1990s, it was estimated that traffic on the public Internet grew by 100 percent per year, while the mean annual growth in the number of Internet users was thought to be between 20% and 50%.[27] This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary open nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network.[28] As of 31 March 2011, the estimated total number of Internet users was 2.095 billion (30.2% of world population).[29] It is estimated that in 1993 the Internet carried only 1% of the information flowing through two-way telecommunication; by 2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information was carried over the Internet.[30]

Technology

Protocols

As the user data is processed down through the protocol stack, each layer adds an encapsulation at the sending host. Data is transmitted “over the wire” at the link level, left to right. The encapsulation stack procedure is reversed by the receiving host. Intermediate relays remove and add a new link encapsulation for retransmission, and inspect the IP layer for routing purposes.

The Internet protocol suite comprises four layers:

  • Application layer
  • Transport layer
  • Internet layer
  • Link layer

The communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. While the hardware can often be used to support other software systems, it is the design and the rigorous standardization process of the software architecture that characterizes the Internet and provides the foundation for its scalability and success. The responsibility for the architectural design of the Internet software systems has been delegated to the Internet Engineering Task Force (IETF).[31] The IETF conducts standard-setting working groups, open to any individual, on the various aspects of Internet architecture. The resulting discussions and final standards are published in a series of publications, each called a Request for Comments (RFC), freely available on the IETF web site.

The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards. Other less rigorous documents are simply informative, experimental, or historical, or document the best current practices (BCP) when implementing Internet technologies.

The Internet standards describe a framework known as the Internet protocol suite. This is a model architecture that divides methods into a layered system of protocols (RFC 1122, RFC 1123). The layers correspond to the environment or scope in which their services operate. At the top is the application layer, the space for the application-specific networking methods used in software applications, e.g., a web browser program uses the client-server application model and many file-sharing systems use a peer-to-peer paradigm. Below this top layer, the transport layer connects applications on different hosts via the network with appropriate data exchange methods. Underlying these layers are the core networking technologies, consisting of two layers.

The internet layer enables computers to identify and locate each other via Internet Protocol (IP) addresses, and allows them to connect to one another via intermediate (transit) networks. Last, at the bottom of the architecture, is a software layer, the link layer, that provides connectivity between hosts on the same local network link, such as a local area network (LAN) or a dial-up connection. The model, also known as TCP/IP, is designed to be independent of the underlying hardware, which the model therefore does not concern itself with in any detail. Other models have been developed, such as the Open Systems Interconnection (OSI) model, but they are not compatible in the details of description or implementation; many similarities exist and the TCP/IP protocols are usually included in the discussion of OSI networking.
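
The division of labour between the layers shows up in ordinary network code. In the Python sketch below, the program composes only application-layer data (an HTTP request); the operating system’s TCP/IP stack supplies the transport, internet, and link layers. The example requires a live Internet connection, and example.com is used as a placeholder host:

    import socket

    # Application layer: an HTTP request we compose ourselves
    request = b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"

    # Transport and internet layers: a TCP connection over IP,
    # provided entirely by the operating system's protocol stack
    with socket.create_connection(("example.com", 80)) as sock:
        sock.sendall(request)
        reply = b""
        while chunk := sock.recv(4096):  # read until the server closes the connection
            reply += chunk

    print(reply.split(b"\r\n")[0].decode())  # e.g. "HTTP/1.1 200 OK"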

The most prominent component of the Internet model is the Internet Protocol (IP), which provides addressing systems (IP addresses) for computers on the Internet. IP enables internetworking and in essence establishes the Internet itself. IP Version 4 (IPv4) is the initial version used on the first generation of today’s Internet and is still in dominant use. It was designed to address up to ~4.3 billion (4.3×10⁹) Internet hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which entered its final stage in 2011,[32] when the global address allocation pool was exhausted. A new protocol version, IPv6, was developed in the mid-1990s, which provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 is currently in growing deployment around the world, since Internet address registries (RIRs) began to urge all resource managers to plan rapid adoption and conversion.[33]

IPv6 is not interoperable with IPv4. In essence, it establishes a parallel version of the Internet not directly accessible with IPv4 software. This means software upgrades or translator facilities are necessary for networking devices that need to communicate on both networks. Most modern computer operating systems already support both versions of the Internet Protocol. Network infrastructures, however, are still lagging in this development. Aside from the complex array of physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements), and by technical specifications or protocols that describe how to exchange data over the network. Indeed, the Internet is defined by its interconnections and routing policies.
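
The difference in address capacity between the two versions is easy to see with Python’s standard ipaddress module (the two addresses below are merely illustrative):

    import ipaddress

    # Illustrative addresses, one per protocol version
    v4 = ipaddress.ip_address("93.184.216.34")
    v6 = ipaddress.ip_address("2606:2800:220:1:248:1893:25c8:1946")

    print(v4.version, v4.max_prefixlen)  # 4 32  -> 2**32, about 4.3 billion addresses
    print(v6.version, v6.max_prefixlen)  # 6 128 -> 2**128, about 3.4e38 addresses
    print(2 ** 32, 2 ** 128)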

Routing

Internet packet routing is accomplished among various tiers of Internet service providers.

Internet service providers connect customers, which represent the bottom of the routing hierarchy, to customers of other ISPs via other higher or same-tier networks. At the top of the routing hierarchy are the Tier 1 networks, large telecommunication companies which exchange traffic directly with all other Tier 1 networks via peering agreements. Tier 2 networks buy Internet transit from other providers to reach at least some parties on the global Internet, though they may also engage in peering. An ISP may use a single upstream provider for connectivity, or implement multihoming to achieve redundancy. Internet exchange points are major traffic exchanges with physical connections to multiple ISPs.

Computers and routers use routing tables to direct IP packets to the next-hop router or destination. Routing tables are maintained by manual configuration or by routing protocols. End-nodes typically use a default route that points toward an ISP providing transit, while ISP routers use the Border Gateway Protocol to establish the most efficient routing across the complex connections of the global Internet.
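
The core of a routing-table lookup is longest-prefix matching: among all table entries that contain the destination address, the most specific one wins. A small Python sketch, with an invented three-entry table:

    import ipaddress

    # A toy routing table: (destination prefix, next hop)
    routes = [
        (ipaddress.ip_network("0.0.0.0/0"), "isp-gateway"),      # default route
        (ipaddress.ip_network("10.0.0.0/8"), "internal-router"),
        (ipaddress.ip_network("10.1.2.0/24"), "lab-switch"),
    ]

    def next_hop(destination):
        """Pick the matching route with the longest prefix, as real routers do."""
        addr = ipaddress.ip_address(destination)
        matches = [(net, hop) for net, hop in routes if addr in net]
        return max(matches, key=lambda m: m[0].prefixlen)[1]

    print(next_hop("10.1.2.7"))  # lab-switch   (most specific match wins)
    print(next_hop("8.8.8.8"))   # isp-gateway  (falls through to the default route)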

Large organizations, such as academic institutions, large enterprises, and governments, may perform the same function as ISPs, engaging in peering and purchasing transit on behalf of their internal networks. Research networks tend to interconnect into large subnetworks such as GEANT, GLORIAD, Internet2, and the UK’s national research and education network, JANET.

General structure

The Internet structure and its usage characteristics have been studied extensively. It has been determined that both the Internet IP routing structure and hypertext links of the World Wide Web are examples of scale-free networks.[34]

Many computer scientists describe the Internet as a “prime example of a large-scale, highly engineered, yet highly complex system”.[35] The Internet is heterogeneous; for instance, data transfer rates and physical characteristics of connections vary widely. The Internet exhibits “emergent phenomena” that depend on its large-scale organization. For example, data transfer rates exhibit temporal self-similarity. The principles of the routing and addressing methods for traffic in the Internet reach back to their origins in the 1960s, when the eventual scale and popularity of the network could not be anticipated.[36] Thus, the possibility of developing alternative structures has been investigated.[37] The Internet structure was found to be highly robust to random failures[38] yet very vulnerable to deliberate attacks on highly connected nodes.[39]

Governance

Main article: Internet governance

ICANN headquarters in Marina Del Rey, California, United States

The Internet is a globally distributed network comprising many voluntarily interconnected autonomous networks. It operates without a central governing body.

The technical underpinning and standardization of the Internet’s core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force(IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise.

To maintain interoperability, the principal name spaces of the Internet are administered by the Internet Corporation for Assigned Names and Numbers (ICANN), headquartered in Marina del Rey, California. ICANN is the authority that coordinates the assignment of unique identifiers for use on the Internet, including domain names, Internet Protocol (IP) addresses, application port numbers in the transport protocols, and many other parameters. Globally unified name spaces, in which names and numbers are uniquely assigned, are essential for maintaining the global reach of the Internet. ICANN is governed by an international board of directors drawn from across the Internet technical, business, academic, and other non-commercial communities. ICANN’s role in coordinating the assignment of unique identifiers distinguishes it as perhaps the only central coordinating body for the global Internet.[40]

Allocation of IP addresses is delegated to Regional Internet Registries (RIRs), each responsible for a region of the world:

  • African Network Information Center (AfriNIC) for Africa
  • American Registry for Internet Numbers (ARIN) for North America
  • Asia-Pacific Network Information Centre (APNIC) for Asia and the Pacific region
  • Latin American and Caribbean Internet Addresses Registry (LACNIC) for Latin America and the Caribbean region
  • RIPE NCC for Europe, the Middle East, and Central Asia

The National Telecommunications and Information Administration, an agency of the United States Department of Commerce, continues to have final approval over changes to the DNS root zone.[41][42][43]

The Internet Society (ISOC) was founded in 1992, with a mission to “assure the open development, evolution and use of the Internet for the benefit of all people throughout the world”.[44] Its members include individuals (anyone may join) as well as corporations, organizations, governments, and universities. Among other activities ISOC provides an administrative home for a number of less formally organized groups that are involved in developing and managing the Internet, including: the Internet Engineering Task Force (IETF), Internet Architecture Board (IAB), Internet Engineering Steering Group (IESG), Internet Research Task Force (IRTF), and Internet Research Steering Group (IRSG).

On 16 November 2005, the United Nations-sponsored World Summit on the Information Society, held in Tunis, established the Internet Governance Forum (IGF) to discuss Internet-related issues.

Modern uses

The Internet allows greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections. The Internet can be accessed almost anywhere by numerous means, including through mobile Internet devices. Mobile phones, datacards, handheld game consoles and cellular routers allow users to connect to the Internet wirelessly. Within the limitations imposed by small screens and other limited facilities of such pocket-sized devices, the services of the Internet, including email and the web, may be available. Service providers may restrict the services offered and mobile data charges may be significantly higher than other access methods.

Educational material at all levels from pre-school to post-doctoral is available from websites. Examples range from CBeebies, through school and high-school revision guides and virtual universities, to access to top-end scholarly literature through the likes of Google Scholar. For distance education, help with homework and other assignments, self-guided learning, whiling away spare time, or just looking up more detail on an interesting fact, it has never been easier for people to access educational information at any level from anywhere. The Internet in general and the World Wide Web in particular are important enablers of both formal and informal education.

The low cost and nearly instantaneous sharing of ideas, knowledge, and skills has made collaborative work dramatically easier, with the help of collaborative software. Not only can a group cheaply communicate and share ideas but the wide reach of the Internet allows such groups more easily to form. An example of this is the free software movement, which has produced, among other things, Linux, Mozilla Firefox, and OpenOffice.org. Internet chat, whether using an IRC chat room, an instant messaging system, or a social networking website, allows colleagues to stay in touch in a very convenient way while working at their computers during the day. Messages can be exchanged even more quickly and conveniently than via email. These systems may allow files to be exchanged, drawings and images to be shared, or voice and video contact between team members.

Content management systems allow collaborating teams to work on shared sets of documents simultaneously without accidentally destroying each other’s work. Business and project teams can share calendars as well as documents and other information. Such collaboration occurs in a wide variety of areas including scientific research, software development, conference planning, political activism and creative writing. Social and political collaboration is also becoming more widespread as both Internet access and computer literacy spread.

The Internet allows computer users to remotely access other computers and information stores easily, wherever they may be. They may do this with or without computer security, i.e. authentication and encryption technologies, depending on the requirements. This is encouraging new ways of working from home, collaboration and information sharing in many industries. An accountant sitting at home can audit the books of a company based in another country, on a server situated in a third country that is remotely maintained by IT specialists in a fourth. These accounts could have been created by home-working bookkeepers, in other remote locations, based on information emailed to them from offices all over the world. Some of these things were possible before the widespread use of the Internet, but the cost of private leased lines would have made many of them infeasible in practice. An office worker away from their desk, perhaps on the other side of the world on a business trip or a holiday, can access their emails, access their data using cloud computing, or open a remote desktop session into their office PC using a secure Virtual Private Network (VPN) connection on the Internet. This can give the worker complete access to all of their normal files and data, including email and other applications, while away from the office. It has been referred to among system administrators as the Virtual Private Nightmare,[45] because it extends the secure perimeter of a corporate network into remote locations and its employees’ homes.

Services

World Wide Web

This NeXT Computer was used by Tim Berners-Lee at CERN and became the world’s first Web server.

Many people use the terms Internet and World Wide Web, or just the Web, interchangeably, but the two terms are not synonymous. The World Wide Web is only one of hundreds of services used on the Internet. The Web is a global set of documents, images and other resources, logically interrelated by hyperlinks and referenced with Uniform Resource Identifiers (URIs). URIs symbolically identify services, servers, and other databases, and the documents and resources that they can provide. Hypertext Transfer Protocol (HTTP) is the main access protocol of the World Wide Web. Web services also use HTTP to allow software systems to communicate in order to share and exchange business logic and data.
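
Dereferencing a URI over HTTP takes a few lines with Python’s standard library; the URL below is a placeholder, and the example needs a live Internet connection:

    from urllib.request import urlopen

    # Fetch a resource by its URI: the browser does essentially this for every page
    with urlopen("http://example.com/") as response:
        print(response.status)                   # e.g. 200
        print(response.headers["Content-Type"])  # e.g. text/html; charset=UTF-8
        body = response.read().decode("utf-8")

    print(body[:80])  # the start of the HTML document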

World Wide Web browser software, such as Microsoft’s Internet Explorer, Mozilla Firefox, Opera, Apple‘s Safari, and Google Chrome, lets users navigate from one web page to another via hyperlinks embedded in the documents. These documents may also contain any combination of computer data, including graphics, sounds, text, video, multimedia and interactive content that runs while the user is interacting with the page. Client-side software can include animations, games, office applications and scientific demonstrations. Through keyword-driven Internet research using search engines like Yahoo! and Google, users worldwide have easy, instant access to a vast and diverse amount of online information. Compared to printed media, books, encyclopedias and traditional libraries, the World Wide Web has enabled the decentralization of information on a large scale.

The Web has also enabled individuals and organizations to publish ideas and information to a potentially large audience online at greatly reduced expense and time delay. Publishing a web page, a blog, or building a website involves little initial cost and many cost-free services are available. Publishing and maintaining large, professional web sites with attractive, diverse and up-to-date information is still a difficult and expensive proposition, however. Many individuals and some companies and groups use web logs or blogs, which are largely used as easily updatable online diaries. Some commercial organizations encourage staff to communicate advice in their areas of specialization in the hope that visitors will be impressed by the expert knowledge and free information, and be attracted to the corporation as a result.

One example of this practice is Microsoft, whose product developers publish their personal blogs in order to pique the public’s interest in their work. Collections of personal web pages published by large service providers remain popular, and have become increasingly sophisticated. Whereas operations such as Angelfire and GeoCities have existed since the early days of the Web, newer offerings from, for example, Facebook and Twitter currently have large followings. These operations often brand themselves as social network services rather than simply as web page hosts.

Advertising on popular web pages can be lucrative, and e-commerce or the sale of products and services directly via the Web continues to grow.

When the Web began in the 1990s, a typical web page was stored in completed form on a web server, formatted in HTML, ready to be sent to a user’s browser in response to a request. Over time, the process of creating and serving web pages has become more automated and more dynamic. Websites are often created using content management or wiki software with, initially, very little content. Contributors to these systems, who may be paid staff, members of a club or other organization or members of the public, fill underlying databases with content using editing pages designed for that purpose, while casual visitors view and read this content in its final HTML form. There may or may not be editorial, approval and security systems built into the process of taking newly entered content and making it available to the target visitors.

Communication

Email is an important communications service available on the Internet. The concept of sending electronic text messages between parties in a way analogous to mailing letters or memos predates the creation of the Internet. Pictures, documents and other files are sent as email attachments. Emails can be cc-ed to multiple email addresses.
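
The structure of such a message, with carbon copies and an attachment, can be sketched with Python’s standard email and smtplib modules; the addresses, the mail server, and report.pdf are all placeholders:

    from email.message import EmailMessage
    import smtplib

    msg = EmailMessage()
    msg["From"] = "alice@example.com"
    msg["To"] = "bob@example.com"
    msg["Cc"] = "carol@example.com, dave@example.com"  # carbon copies
    msg["Subject"] = "Quarterly report"
    msg.set_content("The report is attached.")

    # Attach a file as a MIME part of the message
    with open("report.pdf", "rb") as f:
        msg.add_attachment(f.read(), maintype="application",
                           subtype="pdf", filename="report.pdf")

    with smtplib.SMTP("mail.example.com") as smtp:  # placeholder mail server
        smtp.send_message(msg)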

Internet telephony is another common communications service made possible by the creation of the Internet. VoIP stands for Voice-over-Internet Protocol, referring to the Internet Protocol that underlies all Internet communication. The idea began in the early 1990s with walkie-talkie-like voice applications for personal computers. In recent years many VoIP systems have become as easy to use and as convenient as a normal telephone. The benefit is that, as the Internet carries the voice traffic, VoIP can be free or cost much less than a traditional telephone call, especially over long distances and especially for those with always-on Internet connections such as cable or ADSL. VoIP is maturing into a competitive alternative to traditional telephone service. Interoperability between different providers has improved and the ability to call or receive a call from a traditional telephone is available. Simple, inexpensive VoIP network adapters are available that eliminate the need for a personal computer.

Voice quality can still vary from call to call, but is often equal to and can even exceed that of traditional calls. Remaining problems for VoIP include emergency telephone number dialing and reliability. Currently, a few VoIP providers provide an emergency service, but it is not universally available. Older traditional phones with no “extra features” may be line-powered only and operate during a power failure; VoIP can never do so without a backup power source for the phone equipment and the Internet access devices. VoIP has also become increasingly popular for gaming applications, as a form of communication between players. Popular VoIP clients for gaming include Ventrilo and Teamspeak. Modern video game consoles also offer VoIP chat features.

Data transfer

File sharing is an example of transferring large amounts of data across the Internet. A computer file can be emailed to customers, colleagues and friends as an attachment. It can be uploaded to a website or FTP server for easy download by others. It can be put into a “shared location” or onto a file server for instant use by colleagues. The load of bulk downloads to many users can be eased by the use of “mirror” servers or peer-to-peer networks. In any of these cases, access to the file may be controlled by user authentication, the transit of the file over the Internet may be obscured by encryption, and money may change hands for access to the file. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed – usually fully encrypted – across the Internet. The origin and authenticity of the file received may be checked by digital signatures or by MD5 or other message digests. These simple features of the Internet, over a worldwide basis, are changing the production, sale, and distribution of anything that can be reduced to a computer file for transmission. This includes all manner of print publications, software products, news, music, film, video, photography, graphics and the other arts. This in turn has caused seismic shifts in each of the existing industries that previously controlled the production and distribution of these products.
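
Verifying a received file against a published digest is straightforward with Python’s hashlib; the file name and expected value below are hypothetical, and SHA-256 is shown since MD5 is now considered weak:

    import hashlib

    def file_digest(path, algorithm="sha256"):
        """Hash a file in chunks so large downloads need not fit in memory."""
        h = hashlib.new(algorithm)
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    # Hypothetical digest published by the distributor alongside the download
    expected = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
    print(file_digest("download.iso") == expected)  # True if the file is intact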

Streaming media is the real-time delivery of digital media for the immediate consumption or enjoyment by end users. Many radio and television broadcasters provide Internet feeds of their live audio and video productions. They may also allow time-shift viewing or listening such as Preview, Classic Clips and Listen Again features. These providers have been joined by a range of pure Internet “broadcasters” who never had on-air licenses. This means that an Internet-connected device, such as a computer or something more specific, can be used to access on-line media in much the same way as was previously possible only with a television or radio receiver. The range of available types of content is much wider, from specialized technical webcasts to on-demand popular multimedia services. Podcasting is a variation on this theme, where – usually audio – material is downloaded and played back on a computer or shifted to a portable media player to be listened to on the move. These techniques using simple equipment allow anybody, with little censorship or licensing control, to broadcast audio-visual material worldwide.

Digital media streaming increases the demand for network bandwidth. For example, standard image quality needs 1 Mbit/s link speed for SD 480p, HD 720p quality requires 2.5 Mbit/s, and the top-of-the-line HDX quality needs 4.5 Mbit/s for 1080p.[46]
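
Those link speeds translate directly into data volumes; a quick back-of-the-envelope calculation in Python, using the rates quoted above:

    # Link speeds quoted above, in megabits per second
    rates_mbit_s = {"SD 480p": 1.0, "HD 720p": 2.5, "HDX 1080p": 4.5}

    for quality, rate in rates_mbit_s.items():
        # Mbit/s x 3600 s per hour, / 8 bits per byte, / 1000 MB per GB
        gb_per_hour = rate * 3600 / 8 / 1000
        print(f"{quality}: about {gb_per_hour:.2f} GB per hour")
    # prints about 0.45, 1.12 and 2.02 GB per hour respectively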

Webcams are a low-cost extension of this phenomenon. While some webcams can give full-frame-rate video, the picture is usually either small or updates slowly. Internet users can watch animals around an African waterhole, ships in the Panama Canal, traffic at a local roundabout or their own premises, live and in real time. Video chat rooms and video conferencing are also popular, with many uses being found for personal webcams, with and without two-way sound. YouTube was founded on 15 February 2005 and is now the leading website for free streaming video, with a vast number of users. It uses a Flash-based web player to stream and show video files. Registered users may upload an unlimited amount of video and build their own personal profile. YouTube claims that its users watch hundreds of millions of videos, and upload hundreds of thousands, daily.[47]

Access

Main article: Internet access

Common methods of Internet access in homes include dial-up, landline broadband (over coaxial cable, fiber optic or copper wires), Wi-Fi, satellite and 3G/4G technology cell phones. Public places to use the Internet include libraries and Internet cafes, where computers with Internet connections are available. There are also Internet access points in many public places such as airport halls and coffee shops, in some cases just for brief use while standing. Various terms are used, such as “public Internet kiosk”, “public access terminal”, and “Web payphone“. Many hotels now also have public terminals, though these are usually fee-based. These terminals are widely used for purposes such as ticket booking, bank deposits, and online payments. Wi-Fi provides wireless access to computer networks, and therefore can do so to the Internet itself. Hotspots providing such access include Wi-Fi cafes, where would-be users need to bring their own wireless-enabled devices such as a laptop or PDA. These services may be free to all, free to customers only, or fee-based. A hotspot need not be limited to a confined location; a whole campus or park, or even an entire city, can be enabled.

Grassroots efforts have led to wireless community networks. Commercial Wi-Fi services covering large city areas are in place in London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh. The Internet can then be accessed from such places as a park bench.[48] Apart from Wi-Fi, there have been experiments with proprietary mobile wireless networks like Ricochet, various high-speed data services over cellular phone networks, and fixed wireless services. High-end mobile phones such as smartphones in general come with Internet access through the phone network. Web browsers such as Opera are available on these advanced handsets, which can also run a wide variety of other Internet software. More mobile phones have Internet access than PCs, though this is not as widely used.[49] An Internet access provider and protocol matrix differentiates the methods used to get online.

An Internet blackout or outage can be caused by local signaling interruptions. Disruptions of submarine communications cables may cause blackouts or slowdowns to large areas, such as in the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to a small number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging for scrap metal severed most connectivity for the nation of Armenia.[50] Internet blackouts affecting almost entire countries can be achieved by governments as a form of Internet censorship, as in the blockage of the Internet in Egypt, whereby approximately 93%[51] of networks were without access in 2011 in an attempt to stop mobilization for anti-government protests.[52]

Users

Internet users per 100 inhabitants. Source: International Telecommunications Union.[53][54]

Overall Internet usage has seen tremendous growth. From 2000 to 2009, the number of Internet users globally rose from 394 million to 1.858 billion.[57] By 2010, 22 percent of the world’s population had access to computers, with 1 billion Google searches every day, 300 million Internet users reading blogs, and 2 billion videos viewed daily on YouTube.[58]

The prevalent language for communication on the Internet has been English. This may be a result of the origin of the Internet, as well as the language’s role as a lingua franca. Early computer systems were limited to the characters in the American Standard Code for Information Interchange (ASCII), a subset of the Latin alphabet.

After English (27%), the most requested languages on the World Wide Web are Chinese (23%), Spanish (8%), Japanese (5%), Portuguese and German (4% each), Arabic, French and Russian (3% each), and Korean (2%).[59] By region, 42% of the world’s Internet users are based in Asia, 24% in Europe, 14% in North America, 10% in Latin America and the Caribbean taken together, 6% in Africa, 3% in the Middle East and 1% in Australia/Oceania.[60] The Internet’s technologies have developed enough in recent years, especially in the use of Unicode, that good facilities are available for development and communication in the world’s widely used languages. However, some glitches such as mojibake (incorrect display of some languages’ characters) still remain.
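
Mojibake is what happens when bytes are decoded with the wrong character encoding, as this deliberate mistake in Python shows:

    text = "café"                        # contains a non-ASCII character

    encoded = text.encode("utf-8")       # the bytes actually transmitted
    garbled = encoded.decode("cp1252")   # a receiver wrongly assuming Windows-1252

    print(garbled)   # café  -- the classic mojibake pattern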

In an American study in 2005, the percentage of men using the Internet was very slightly ahead of the percentage of women, although this difference reversed in those under 30. Men logged on more often, spent more time online, and were more likely to be broadband users, whereas women tended to make more use of opportunities to communicate (such as email). Men were more likely to use the Internet to pay bills, participate in auctions, and for recreation such as downloading music and videos. Men and women were equally likely to use the Internet for shopping and banking.[61] More recent studies indicate that in 2008, women significantly outnumbered men on most social networking sites, such as Facebook and Myspace, although the ratios varied with age.[62] In addition, women watched more streaming content, whereas men downloaded more.[63] In terms of blogs, men were more likely to blog in the first place; among those who blog, men were more likely to have a professional blog, whereas women were more likely to have a personal blog.[64]

According to Euromonitor, by 2020 43.7% of the world’s population will be users of the Internet. Splitting by country, in 2011 Iceland, Norway and the Netherlands had the highest Internet penetration by the number of users, with more than 90% of the population with access.

Social impact

The Internet has enabled entirely new forms of social interaction, activities, and organizing, thanks to its basic features such as widespread usability and access.

Social networking and entertainment

Many people use the World Wide Web to access news, weather and sports reports, to plan and book vacations and to find out more about their interests. People use chat, messaging and email to make and stay in touch with friends worldwide, sometimes in the same way as some previously had pen pals. The Internet has seen a growing number of Web desktops, where users can access their files and settings via the Internet.

Social networking websites such as Facebook, Twitter, and MySpace have created new ways to socialize and interact. Users of these sites are able to add a wide variety of information to pages, to pursue common interests, and to connect with others. It is also possible to find existing acquaintances, to allow communication among existing groups of people. Sites like LinkedIn foster commercial and business connections. YouTube and Flickr specialize in users’ videos and photographs.

The Internet has been a major outlet for leisure activity since its inception, with entertaining social experiments such as MUDs and MOOs being conducted on university servers, and humor-related Usenet groups receiving much traffic. Today, many Internet forums have sections devoted to games and funny videos; short cartoons in the form of Flash movies are also popular. Over 6 million people use blogs or message boards as a means of communication and for the sharing of ideas. The Internet pornography and online gambling industries have taken advantage of the World Wide Web, and often provide a significant source of advertising revenue for other websites.[65] Although many governments have attempted to restrict both industries’ use of the Internet, in general this has failed to stop their widespread popularity.[66]

Another area of leisure activity on the Internet is multiplayer gaming.[67] This form of recreation creates communities, where people of all ages and origins enjoy the fast-paced world of multiplayer games. These range from MMORPG to first-person shooters, from role-playing video games to online gambling. While online gaming has been around since the 1970s, modern modes of online gaming began with subscription services such as GameSpy and MPlayer.[68] Non-subscribers were limited to certain types of game play or certain games. Many people use the Internet to access and download music, movies and other works for their enjoyment and relaxation. Free and fee-based services exist for all of these activities, using centralized servers and distributed peer-to-peer technologies. Some of these sources exercise more care with respect to the original artists’ copyrights than others.

Internet usage has been correlated to users’ loneliness.[69] Lonely people tend to use the Internet as an outlet for their feelings and to share their stories with others, such as in the “I am lonely will anyone speak to me” thread.

Cybersectarianism is a new organizational form which involves: “highly dispersed small groups of practitioners that may remain largely anonymous within the larger social context and operate in relative secrecy, while still linked remotely to a larger network of believers who share a set of practices and texts, and often a common devotion to a particular leader. Overseas supporters provide funding and support; domestic practitioners distribute tracts, participate in acts of resistance, and share information on the internal situation with outsiders. Collectively, members and practitioners of such sects construct viable virtual communities of faith, exchanging personal testimonies and engaging in collective study via email, on-line chat rooms and web-based message boards.”[70]

Cyberslacking can become a drain on corporate resources; the average UK employee spent 57 minutes a day surfing the Web while at work, according to a 2003 study by Peninsula Business Services.[71] Internet addiction disorder is excessive computer use that interferes with daily life. Writer Nicholas Carr believes that Internet use has other effects on individuals, for instance improving skills of scan-reading while interfering with the deep thinking that leads to true creativity.[72]

Electronic business

Main article: Electronic business

Electronic business (E-business) involves business processes spanning the entire value chain: electronic purchasing and supply chain management, processing orders electronically, handling customer service, and cooperating with business partners. E-commerce seeks to add revenue streams using the Internet to build and enhance relationships with clients and partners.

According to research firm IDC, the size of total worldwide e-commerce, when global business-to-business and business-to-consumer transactions are added together, will equate to $16 trillion in 2013. IDATE, another research firm, estimates the global market for digital products and services at $4.4 trillion in 2013. A report by Oxford Economics adds those two together to estimate the total size of the digital economy at $20.4 trillion, equivalent to roughly 13.8% of global sales.[73]

While much has been written of the economic advantages of Internet-enabled commerce, there is also evidence that some aspects of the Internet such as maps and location-aware services may serve to reinforce economic inequality and the digital divide.[74] Electronic commerce may be responsible for consolidation and the decline of mom-and-pop brick-and-mortar businesses, resulting in increases in income inequality.[75][76][77]

Telecommuting

Main article: Telecommuting

Remote work is facilitated by tools such as groupware, virtual private networks, conference calling, videoconferencing, and Voice over IP (VoIP). It can be efficient and useful for companies as it allows workers to communicate over long distances, saving significant amounts of travel time and cost. As broadband Internet connections become more commonplace, more and more workers have adequate bandwidth at home to use these tools to link their home to their corporate intranet and internal phone networks.

Crowdsourcing

Main article: Crowdsourcing

The Internet provides a particularly good venue for crowdsourcing (outsourcing tasks to a distributed group of people), since individuals tend to be more open in web-based projects, where they are not being physically judged or scrutinized, and thus can feel more comfortable sharing.

Crowdsourcing systems are used to accomplish a variety of tasks. For example, the crowd may be invited to develop a new technology, carry out a design task, refine or carry out the steps of an algorithm (see human-based computation), or help capture, systematize, or analyze large amounts of data (see also citizen science).

Wikis have also been used in the academic community for sharing and dissemination of information across institutional and international boundaries.[78] In those settings, they have been found useful for collaboration on grant writing, strategic planning, departmental documentation, and committee work.[79] The United States Patent and Trademark Office uses a wiki to allow the public to collaborate on finding prior art relevant to examination of pending patent applications. Queens, New York has used a wiki to allow citizens to collaborate on the design and planning of a local park.[80]

The English Wikipedia has the largest user base among wikis on the World Wide Web[81] and ranks in the top 10 among all Web sites in terms of traffic.[82]

Politics and political revolutions

The Internet has achieved new relevance as a political tool. The presidential campaign of Howard Dean in 2004 in the United States was notable for its success in soliciting donations via the Internet. Many political groups use the Internet to achieve a new method of organizing in order to carry out their mission, giving rise to Internet activism, most notably practiced by rebels in the Arab Spring.[83][84]

The New York Times suggested that social media websites, such as Facebook and Twitter, helped people organize the political revolutions in Egypt, where they helped certain classes of protesters organize protests, communicate grievances, and disseminate information.[85]

The potential of the Internet as a civic tool of communicative power was thoroughly explored by Simon R. B. Berdal in his thesis of 2004:

As the globally evolving Internet provides ever new access points to virtual discourse forums, it also promotes new civic relations and associations within which communicative power may flow and accumulate. Thus, traditionally … national-embedded peripheries get entangled into greater, international peripheries, with stronger combined powers… The Internet, as a consequence, changes the topology of the “centre-periphery” model, by stimulating conventional peripheries to interlink into “super-periphery” structures, which enclose and “besiege” several centres at once.[86]

Berdal, therefore, extends the Habermasian notion of the public sphere to the Internet, and underlines the inherent global and civic nature that interwoven Internet technologies provide. To limit the growing civic potential of the Internet, Berdal also notes how "self-protective measures" are put in place by those threatened by it:

If we consider China’s attempts to filter “unsuitable material” from the Internet, most of us would agree that this resembles a self-protective measure by the system against the growing civic potentials of the Internet. Nevertheless, both types represent limitations to “peripheral capacities”. Thus, the Chinese government tries to prevent communicative power to build up and unleash (as the 1989 Tiananmen Square uprising suggests, the government may find it wise to install “upstream measures”). Even though limited, the Internet is proving to be an empowering tool also to the Chinese periphery: Analysts believe that Internet petitions have influenced policy implementation in favour of the public’s online-articulated will …[86]

Philanthropy

The spread of low-cost Internet access in developing countries has opened up new possibilities for peer-to-peer charities, which allow individuals to contribute small amounts to charitable projects for other individuals. Websites, such as DonorsChoose and GlobalGiving, allow small-scale donors to direct funds to individual projects of their choice.

A popular twist on Internet-based philanthropy is the use of peer-to-peer lending for charitable purposes. Kiva pioneered this concept in 2005, offering the first web-based service to publish individual loan profiles for funding. Kiva raises funds for local intermediary microfinance organizations which post stories and updates on behalf of the borrowers. Lenders can contribute as little as $25 to loans of their choice, and receive their money back as borrowers repay. Kiva falls short of being a pure peer-to-peer charity, in that loans are disbursed before being funded by lenders and borrowers do not communicate with lenders themselves.[87][88]

However, the recent spread of low cost Internet access in developing countries has made genuine international person-to-person philanthropy increasingly feasible. In 2009 the US-based nonprofit Zidisha tapped into this trend to offer the first person-to-person microfinance platform to link lenders and borrowers across international borders without intermediaries. Members can fund loans for as little as a dollar, which the borrowers then use to develop business activities that improve their families’ incomes while repaying loans to the members with interest. Borrowers access the Internet via public cybercafes, donated laptops in village schools, and even smart phones, then create their own profile pages through which they share photos and information about themselves and their businesses. As they repay their loans, borrowers continue to share updates and dialogue with lenders via their profile pages. This direct web-based connection allows members themselves to take on many of the communication and recording tasks traditionally performed by local organizations, bypassing geographic barriers and dramatically reducing the cost of microfinance services to the entrepreneurs.[89]

Surveillance

The vast majority of computer surveillance involves the monitoring of data and traffic on the Internet.[90] In the United States, for example, under the Communications Assistance For Law Enforcement Act, all phone calls and broadband Internet traffic (emails, web traffic, instant messaging, etc.) are required to be available for unimpeded real-time monitoring by Federal law enforcement agencies.[91][92][93]

Packet capture (also sometimes referred to as "packet sniffing") is the monitoring of data traffic on a computer network. Computers communicate over the Internet by breaking up messages (emails, images, videos, web pages, files, etc.) into small chunks called "packets", which are routed through a network of computers until they reach their destination, where they are assembled back into a complete "message" again. A packet capture appliance intercepts these packets as they travel through the network, so that their contents can be examined using other programs. A packet capture is an information-gathering tool, not an analysis tool: it gathers "messages" but does not analyze them or figure out what they mean. Other programs are needed to perform traffic analysis and sift through intercepted data looking for important or useful information. Under the Communications Assistance For Law Enforcement Act, all U.S. telecommunications providers are required to install packet sniffing technology to allow Federal law enforcement and intelligence agencies to intercept all of their customers' broadband Internet and voice over Internet Protocol (VoIP) traffic.[94]
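To make the mechanics concrete, here is a minimal sketch of packet capture in Python; it illustrates the general technique only, not any law-enforcement appliance, and assumes Linux (AF_PACKET raw sockets) and root privileges:

    import socket
    import struct

    # Raw socket bound to every Ethernet protocol (0x0003 = ETH_P_ALL).
    # Linux-only; must be run as root.
    sniffer = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(0x0003))

    packet, _ = sniffer.recvfrom(65535)  # grab one raw frame off the wire
    # Ethernet header: 6-byte destination MAC, 6-byte source MAC, 2-byte EtherType.
    dst, src, ethertype = struct.unpack("!6s6sH", packet[:14])
    print(f"captured {len(packet)} bytes, EtherType 0x{ethertype:04x}")

As the paragraph notes, this capture step only gathers raw bytes; separate analysis tools must parse the IP and transport headers inside each frame to make sense of the traffic.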

There is far too much data gathered by these packet sniffers for human investigators to manually search through all of it. So automated Internet surveillance computers sift through the vast amount of intercepted Internet traffic, and filter out and report to human investigators those bits of information which are "interesting"—such as the use of certain words or phrases, visiting certain types of web sites, or communicating via email or chat with a certain individual or group.[95] Billions of dollars per year are spent, by agencies such as the Information Awareness Office, NSA, and the FBI, to develop, purchase, implement, and operate systems which intercept and analyze all of this data, and extract only the information which is useful to law enforcement and intelligence agencies.[96]

Similar systems are now operated by the Iranian secret police to identify and suppress dissidents. All required hardware and software was allegedly supplied by Germany's Siemens AG and Finland's Nokia.[97]

Censorship

Internet censorship by country, classified as pervasive, substantial, or selective censorship, changing situation, little or no censorship, or not classified / no data.[98][99][100]

Some governments, such as those of Burma, Iran, North Korea, mainland China, Saudi Arabia, and the United Arab Emirates, restrict what people in their countries can access on the Internet, especially political and religious content. This is accomplished through software that filters domains and content so that they may not be easily accessed or obtained without elaborate circumvention.[101]

In Norway, Denmark, Finland, and Sweden, major Internet service providers have voluntarily, possibly to avoid such an arrangement being turned into law, agreed to restrict access to sites listed by authorities. While this list of forbidden URLs is supposed to contain addresses of only known child pornography sites, the content of the list is secret.[102] Many countries, including the United States, have enacted laws against the possession or distribution of certain material, such as child pornography, via the Internet, but do not mandate filtering software. There are many free and commercially available software programs, called content-control software, with which a user can choose to block offensive websites on individual computers or networks, in order to limit a child's access to pornographic materials or depictions of violence.


Facebook

Posted: January 16, 2014 in Facebook
Tags:

Facebook is an online social networking service. Its name comes from a colloquialism for the directory given to students at some American universities.[7] Facebook was founded in February 2004 by Mark Zuckerberg with his college roommates and fellow Harvard University students Eduardo Saverin, Andrew McCollum, Dustin Moskovitz and Chris Hughes.[8] The founders had initially limited the website's membership to Harvard students, but later expanded it to colleges in the Boston area, the Ivy League, and Stanford University. It gradually added support for students at various other universities before it opened to high-school students, and eventually to anyone aged 13 and over. Facebook now allows anyone who claims to be at least 13 years old to become a registered user of the website.[9]

Users must register before using the site, after which they may create a personal profile, add other users as friends, exchange messages, and receive automatic notifications when they update their profile. Additionally, users may join common-interest user groups, organized by workplace, school or college, or other characteristics, and categorize their friends into lists such as "People From Work" or "Close Friends". As of September 2012, Facebook has over one billion active users,[10] of which 8.7% are fake.[11] As of 2012, Facebook handles about 180 petabytes of data per year, growing by over half a petabyte every 24 hours.[12]

In May 2005, Accel Partners invested $12.7 million in Facebook, and Jim Breyer[13] added $1 million of his own money. A January 2009 Compete.com study ranked Facebook the most used social networking service by worldwide monthly active users.[14] Entertainment Weekly included the site on its end-of-the-decade "best-of" list, saying, "How on earth did we stalk our exes, remember our co-workers' birthdays, bug our friends, and play a rousing game of Scrabulous before Facebook?"[15] Facebook eventually filed for an initial public offering on February 1, 2012; it is headquartered in Menlo Park, California.[2] Facebook Inc. began selling stock to the public and trading on the NASDAQ on May 18, 2012.[16] Based on its 2012 income of US$5.1 billion, Facebook joined the Fortune 500 list for the first time on the list published in May 2013, being placed at position 462.[17]

In 2012, Facebook was valued at $104 billion.[18] As of January 2014, the company has about 1.15 billion monthly users.[19]

Management

The ownership percentages of the company, as of 2012, are: Mark Zuckerberg: 28%,[87] Accel Partners: 10%, Digital Sky Technologies: 10%,[88] Dustin Moskovitz: 6%, Eduardo Saverin: 5%, Sean Parker: 4%, Peter Thiel: 3%, Greylock Partners and Meritech Capital Partners: between 1 and 2% each, Microsoft: 1.3%, Li Ka-shing: 0.8%, the Interpublic Group: less than 0.5%. A small group of current and former employees and celebrities own less than 1% each, including Matt Cohler, Jeff Rothschild, Adam D'Angelo, Chris Hughes, and Owen Van Natta, while Reid Hoffman and Mark Pincus have sizable holdings of the company. The remaining 30% or so are owned by employees, an undisclosed number of celebrities, and outside investors.[89] Adam D'Angelo, former chief technology officer and friend of Zuckerberg, resigned in May 2008. Reports claimed that he and Zuckerberg began quarreling, and that he was no longer interested in partial ownership of the company.[90]

Key management personnel comprise Chris Cox (VP of Product), Sheryl Sandberg (COO), and Mark Zuckerberg (Chairman and CEO). As of April 2011, Facebook has over 2,000 employees, and offices in 15 countries.[91] Other managers include chief financial officer David Ebersman and public relations head Elliot Schrage.[92]

Facebook was named the 5th best company to work for in 2014 by company-review site Glassdoor as part of its sixth annual Employees’ Choice Awards. The website stated that 93% of Facebook employees would recommend the company to a friend.[93]

Revenue

Most of Facebook’s revenue comes from advertising.[94][95]

Revenues
(estimated, in millions US$)
Year Revenue Growth
2006 $52[96]
2007 $150[97] 188%
2008 $280[98] 87%
2009 $775[99] 177%
2010 $2,000[100] 158%
2011 $4,270[101] 114%
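Each growth figure in the table is the year-over-year percentage increase, which can be verified directly from the revenue column:

    # Year-over-year growth for the revenue table above (millions of USD).
    revenues = {2006: 52, 2007: 150, 2008: 280, 2009: 775, 2010: 2000, 2011: 4270}
    years = sorted(revenues)
    for prev, year in zip(years, years[1:]):
        growth = (revenues[year] / revenues[prev] - 1) * 100
        print(year, f"{growth:.0f}%")  # 188%, 87%, 177%, 158%, 114%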

Facebook generally has a lower clickthrough rate (CTR) for advertisements than most major Web sites. According to BusinessWeek.com, banner advertisements on Facebook have generally received one-fifth the number of clicks compared to those on the Web as a whole,[102] although specific comparisons can reveal a much larger disparity. For example, while Google users click on the first advertisement for search results an average of 8% of the time (80,000 clicks for every one million searches),[103] Facebook’s users click on advertisements an average of 0.04% of the time (400 clicks for every one million pages).[104]
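Those click counts follow from the definition of click-through rate (clicks divided by impressions); a two-line check reproduces the numbers quoted above:

    impressions = 1_000_000
    print(int(impressions * 0.08))    # 80000 clicks at Google's 8% search CTR
    print(int(impressions * 0.0004))  # 400 clicks at Facebook's 0.04% CTR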

Sarah Smith, who was Facebook's Online Sales Operations Manager, reports that successful advertising campaigns on the site can have clickthrough rates as low as 0.05% to 0.04%, and that the CTR for ads tends to fall within two weeks.[105] By comparison, the CTR for the competing social network MySpace is about 0.1%, about 2.5 times better than Facebook's rate but still low compared to many other Web sites. Facebook's low CTR has been attributed to younger users enabling ad blocking software and being better at ignoring advertising messages, as well as the site being used more for social communication than for viewing content.[106]

On pages for brands and products, however, some companies have reported CTR as high as 6.49% for Wall posts.[107] A study found that, for video advertisements on Facebook, over 40% of users who viewed the videos viewed the entire video, while the industry average was 25% for in-banner video ads.[108]

Mergers and acquisitions

On November 15, 2010, Facebook announced it had acquired the domain name fb.com from the American Farm Bureau Federation for an undisclosed amount. On January 11, 2011, the Farm Bureau disclosed $8.5 million in “domain sales income”, making the acquisition of FB.com one of the ten highest domain sales in history.[109]

Offices

Entrance to Facebook headquarters complex in Menlo Park, California

Entrance to Facebook's previous headquarters in the Stanford Research Park, Palo Alto, California

In early 2011, Facebook announced plans to move to its new headquarters, the former Sun Microsystems campus in Menlo Park, California.

All users outside of the US and Canada have a contract with Facebook's Irish subsidiary "Facebook Ireland Limited". This allows Facebook to avoid US taxes for all users in Europe, Asia, Australia, Africa and South America. Facebook makes use of the Double Irish arrangement, which allows it to pay only about 2-3% corporation tax on all international revenue.[110]

In 2010, Facebook opened its fourth office, in Hyderabad,[111][112][113] its first in Asia.[114]

Facebook, which in 2010 had more than 750 million active users globally, including over 23 million in India, announced that its Hyderabad centre would house online advertising and developer support teams and provide round-the-clock, multi-lingual support to the social networking site's users and advertisers globally.[115] With this, Facebook joins other giants like Google, Microsoft, Oracle, Dell, IBM and Computer Associates that have already set up shop.[116] In Hyderabad, it is registered as 'Facebook India Online Services Pvt Ltd'.[117][118][119]

Though Facebook did not specify its India investment or hiring figures, it said recruitment had already begun for a director of operations and other key positions at Hyderabad,[120] which would supplement its operations in California, in Dublin, Ireland, and at Austin, Texas.

A custom-built data center with substantially reduced (“38% less”) power consumption compared to existing Facebook data centers opened in April 2011 in Prineville, Oregon.[121] In April 2012, Facebook opened a second data center in Forest City, North Carolina, US.[122]

On October 1, 2012, CEO Zuckerberg visited Moscow to stimulate social media innovation in Russia and to boost Facebook's position in the Russian market.[123] Russia's communications minister tweeted that Prime Minister Dmitry Medvedev urged the social media giant's founder to abandon plans to lure away Russian programmers and instead consider opening a research center in Moscow. Facebook has roughly 9 million users in Russia, while domestic analogue VK has around 34 million.[124]

The opening of a woodworking facility on the Menlo Park campus was announced at the end of August 2013. The facility, opened in June 2013, provides equipment, safety courses and woodworking classes, while employees are required to purchase materials at the in-house store. A Facebook spokesperson explained that the facility is intended to encourage employees to think innovatively by placing them in a different environment, and that it also serves as an attractive perk for prospective employees.[125]

Messaging

A new messaging platform, codenamed "Project Titan", was launched on November 15, 2010. Described as a "Gmail killer" by some publications, the system allows users to communicate directly with each other via Facebook using several different methods (including a special email address, text messaging, or the Facebook website or mobile app). No matter which method is used to deliver a message, messages are contained within single threads in a unified inbox. As with other Facebook features, users can adjust whom they can receive messages from, including just friends, friends of friends, or anyone.[172][173]

Aside from the Facebook website, Messages can also be accessed through the site’s mobile apps, or a dedicated Facebook Messenger app.[174]

Voice calls

Since April 2011, Facebook users have had the ability to make live voice calls via Facebook Chat, allowing users to chat with others from all over the world. This feature, which is provided free through T-Mobile’s new Bobsled service, lets the user add voice to the current Facebook Chat as well as leave voice messages on Facebook.[175]

Video calling

On July 6, 2011, Facebook launched its video calling services using Skype as its technology partner. It allows one-to-one calling using a Skype REST API.

Following

On September 14, 2011, Facebook added the ability for users to provide a “Subscribe” button on their page, which allows users to subscribe to public postings by the user without needing to add them as a friend.[176] In conjunction, Facebook also introduced a system in February 2012 to verify the identity of certain accounts. Unlike a similar system used by Twitter, verified accounts do not display a special verification badge, but are given a higher priority in a user’s “Subscription Suggestions”.[177]

In December 2012, Facebook announced that due to user confusion surrounding its function, the Subscribe button would be re-labeled as a “Follow” button—making it more similar to other social networks with similar functions.[178]

Privacy

To allay concerns about privacy, Facebook enables users to choose their own privacy settings and choose who can see specific parts of their profile.[179] The website is free to users, and generates revenue from advertising, such as banner ads.[180] Facebook requires a user’s name and profile picture (if applicable) to be accessible by everyone. Users can control who sees other information they have shared, as well as who can find them in searches, through their privacy settings.[181]

According to comScore, an internet marketing research company, Facebook collects as much data from its visitors as Google and Microsoft, but considerably less than Yahoo!.[182] In 2010, the security team began expanding its efforts to reduce the risks to users' privacy,[183] but privacy concerns remain.[184] On November 6, 2007, Facebook launched Facebook Beacon, an ultimately failed attempt to advertise to friends of users using the knowledge of what purchases friends made. As of March 2012, Facebook's usage of its user data is under close scrutiny.[185]

Since 2010 the National Security Agency has been taking Facebook profile information from users to discover who their allies, friends, and colleagues are.[186]

In August 2013 High-Tech Bridge published a study showing that links included in Facebook messaging service messages were being accessed by Facebook for its own purposes.[187] In January 2014 two users filed a lawsuit against Facebook alleging that their privacy had been violated by this practice.[18

Reception

According to comScore, Facebook is the leading social networking site based on monthly unique visitors, having overtaken main competitor MySpace in April 2008.[198] ComScore reports that Facebook attracted 130 million unique visitors in May 2010, an increase of 8.6 million people.[199] According to Alexa, the website's ranking among all websites increased from 60th to 7th in worldwide traffic, from September 2006 to September 2007, and is currently 2nd.[200] Quantcast ranks the website 2nd in the U.S. in traffic,[201] and Compete.com ranks it 2nd in the U.S.[202] The website is the most popular for uploading photos, with 50 billion uploaded cumulatively.[203] In 2010, Sophos's "Security Threat Report 2010" polled over 500 firms, 60% of which responded that they believed that Facebook was the social network that posed the biggest threat to security, well ahead of MySpace, Twitter, and LinkedIn.[183]

Facebook is the most popular social networking site in several English-speaking countries, including Canada,[204] the United Kingdom,[205] and the United States.[206][207][208][209] However, Facebook still receives limited adoption in countries such as Japan, where domestically created social networks are still largely preferred.[210] In regional Internet markets, Facebook penetration is highest in North America (69 percent), followed by Middle East-Africa (67 percent), Latin America (58 percent), Europe (57 percent), and Asia-Pacific (17 percent).[211] Some of the top competitors were listed in 2007 by Mashable.[212]

The website has won awards such as placement into the “Top 100 Classic Websites” by PC Magazine in 2007,[213] and winning the “People’s Voice Award” from the Webby Awards in 2008.[214] In a 2006 study conducted by Student Monitor, a New Jersey-based company specializing in research concerning the college student market, Facebook was named the second most popular thing among undergraduates, tied with beer and only ranked lower than the iPod.[215]

In March 2010, Judge Richard Seeborg issued an order approving the class settlement in Lane v. Facebook, Inc.,[216] the class action lawsuit arising out of Facebook's Beacon program.

In 2010, Facebook won the Crunchie “Best Overall Startup Or Product” for the third year in a row[217] and was recognized as one of the “Hottest Silicon Valley Companies” by Lead411.[218]However, in a July 2010 survey performed by the American Customer Satisfaction Index, Facebook received a score of 64 out of 100, placing it in the bottom 5% of all private-sector companies in terms of customer satisfaction, alongside industries such as the IRS e-file system, airlines, and cable companies. The reasons why Facebook scored so poorly include privacy problems, frequent changes to the website’s interface, the results returned by the News Feed, and spam.[219]

Total active users[N 1]
Date Users
(in millions)
Days later Monthly growth[N 2]
August 26, 2008 100[220] 1,665 178.38%
April 8, 2009 200[221] 225 13.33%
September 15, 2009 300[222] 160 9.38%
February 5, 2010 400[223] 143 6.99%
July 21, 2010 500[224] 166 4.52%
January 5, 2011 600[225][N 3] 168 3.57%
May 30, 2011 700[226] 145 3.45%
September 22, 2011 800[227] 115 3.73%
April 24, 2012 900[228] 215 1.74%
October 4, 2012 1,000[229] 163 2.04%
March 31, 2013 1,110[6] 178 1.67%
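The "Monthly growth" column appears to be simple (non-compounded) growth between consecutive milestones, pro-rated to a 30-day month; under that assumption the figures can be reproduced:

    def monthly_growth(old_millions, new_millions, days):
        # Growth between two milestones, scaled linearly to 30 days.
        return (new_millions / old_millions - 1) * 30 / days * 100

    # Example row: 200M (April 8, 2009) -> 300M (September 15, 2009), 160 days.
    print(round(monthly_growth(200, 300, 160), 2))  # 9.38, matching the table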

In December 2008, the Supreme Court of the Australian Capital Territory ruled that Facebook is a valid protocol to serve court notices to defendants. It is believed to be the world's first legal judgement that defines a summons posted on Facebook as legally binding.[230] In March 2009, the New Zealand High Court associate justice David Gendall allowed for the serving of legal papers on Craig Axe by the company Axe Market Garden via Facebook.[231][232] Employers (such as Virgin Atlantic Airways) have also used Facebook as a means to keep tabs on their employees and have even been known to fire them over posts they have made.[233]

By 2005, the use of Facebook had already become so ubiquitous that the generic verb “facebooking” had come into use to describe the process of browsing others’ profiles or updating one’s own.[234] In 2008, Collins English Dictionary declared “Facebook” as its new Word of the Year.[235] In December 2009, the New Oxford American Dictionary declared its word of the year to be the verb “unfriend“, defined as “To remove someone as a ‘friend‘ on a social networking site such as Facebook. As in, ‘I decided to unfriend my roommate on Facebook after we had a fight.'”[236]

In early 2010, Openbook was established, an avowed parody (and privacy advocacy) website[237] that enables text-based searches of those Wall posts that are available to “Everyone”, i.e. to everyone on the Internet.

Writers for The Wall Street Journal found in 2010 that Facebook apps were transmitting identifying information to “dozens of advertising and Internet tracking companies”. The apps used an HTTP referrer which exposed the user’s identity and sometimes their friends’. Facebook said, “We have taken immediate action to disable all applications that violate our terms”.[238]

In January 2013, the countries with the most Facebook users were:[239]

  • United States with 168.8 million members
  • Brazil with 64.6 million members
  • India with 62.6 million members
  • Indonesia with 51.4 million members
  • Mexico with 40.2 million members

All of the above total 309 million members or about 38.6 percent of Facebook’s 1 billion worldwide members.[240] As of March 2013, Facebook reported having 1.11 billion monthly active users, globally.[241]

With regard to Facebook's mobile usage, per an analyst report in early 2013, there are 192 million Android users, 147 million iPhone users, 48 million iPad users and 56 million messenger users, and a total of 604 million mobile Facebook users.[242]

  • Facebook popularity. Active users of Facebook increased from just a million in 2004 to over 750 million in 2011.[243]

  • Population pyramid of Facebook users by age as of January 1, 2010[244]

Social impact

Facebook has affected the social life and activity of people in various ways. With its availability on many mobile devices, Facebook allows users to continuously stay in touch with friends, relatives and other acquaintances wherever they are in the world, as long as there is access to the Internet. It can also unite people with common interests and/or beliefs through groups and other pages, and has been known to reunite lost family members and friends because of the widespread reach of its network.[277] One such reunion was between John Watson and the daughter he had been seeking for 20 years. They met after Watson found her Facebook profile.[278] Another father–daughter reunion was between Tony Macnauton and Frances Simpson, who had not seen each other for nearly 48 years.[279]

Some argue that Facebook is beneficial to one's social life because users can continuously stay in contact with their friends and relatives, while others say that it can cause increased antisocial tendencies because people are not directly communicating with each other. Some studies have named Facebook as a source of problems in relationships. Several news stories have suggested that using Facebook can lead to higher instances of divorce and infidelity, but the claims have been questioned by other commentators.[280]

Health impact

Many Facebook users, especially adolescents, display references to alcohol and substance use on their Facebook profiles.[281][282] One study of alcohol displays by underage college Facebook users found that 35.7% of participant profiles displayed alcohol.[283] This can include photos of underage drinking, or status updates describing alcohol or substance use. This is particularly concerning because new social media such as Facebook can influence adolescents by acting as a "superpeer", promoting norms of behavior among other adolescents.[284] Regardless of whether these displays represent real offline behavior or are posted just to make the Facebook user "look cool", displaying these references may lead to an expectation by friends that the adolescent does or will drink alcohol in the future.[281]

Facebook envy

Unless you get out of Facebook and into someone's face, you really have not acted.

Recent studies have shown that Facebook causes negative effects on self-esteem by triggering feelings of envy, with vacation and holiday photos proving to be the largest resentment triggers. Other prevalent causes of envy include posts by friends about family happiness and images of physical beauty; such envious feelings leave people lonely and dissatisfied with their own lives. A joint study by two German universities discovered that one out of three people were more dissatisfied with their lives after visiting Facebook, and another study by Utah Valley University found that college students felt worse about their own lives following an increase in the amount of time spent on Facebook.[286][287][288]

Political impact

The stage at the Facebook – Saint Anselm College debates in 2008.

Facebook's role in the American political process was demonstrated in January 2008, shortly before the New Hampshire primary, when Facebook teamed up with ABC and Saint Anselm College to allow users to give live feedback about the "back to back" January 5 Republican and Democratic debates.[289][290][291] Charles Gibson moderated both debates, held at the Dana Center for the Humanities at Saint Anselm College. Facebook users took part in debate groups organized around specific topics, registered to vote, and submitted questions.[292]

ABCNews.com reported in 2012 that the Facebook fanbases of political candidates are relevant to election campaigns, in that Facebook:

  • Allows politicians and campaign organizers to understand the interests and demographics of their Facebook fanbases, to better target their voters.
  • Provides a means for voters to keep up-to-date on candidates' activities, such as connecting to the candidates' Facebook Fan Pages.

Over a million people installed the Facebook application "US Politics on Facebook" in order to take part, and the application measured users' responses to specific comments made by the debating candidates.[293] This debate showed the broader community what many young students had already experienced: Facebook as a popular and powerful new way to interact and voice opinions. An article by Michelle Sullivan of Uwire.com illustrates how the "Facebook effect" has affected youth voting rates, support by youth of political candidates, and general involvement by the youth population in the 2008 election.[294]

In February 2008, a Facebook group called "One Million Voices Against FARC" organized an event in which hundreds of thousands of Colombians marched in protest against the Revolutionary Armed Forces of Colombia, better known as the FARC (from the group's Spanish name).[295] In August 2010, one of North Korea's official government websites and the official news agency of the country, Uriminzokkiri, joined Facebook.[296]

A man during the 2011 Egyptian protests carrying a card saying "Facebook, #jan25, The Egyptian Social Network".

In January 2011, Facebook played a major role in generating the first spark for the 2011 Egyptian revolution.[297][298] On January 14, the Facebook page "We are all Khaled Said" was started by Wael Ghonim, who created an event to invite the Egyptian people to "peaceful demonstrations" on January 25. As in Tunisia, Facebook became the primary tool for connecting all protesters, which led the Egyptian government of Prime Minister Nazif to ban Facebook, Twitter and other websites on January 26,[299] and then to ban all mobile and Internet connections for all of Egypt at midnight on January 28. After 18 days, the uprising forced President Mubarak to resign.

In 2011, Facebook filed paperwork with the Federal Election Commission to form a political action committee under the name FB PAC.[300] In an email to The Hill, a spokesman for Facebook said "FB PAC will give our employees a way to make their voice heard in the political process by supporting candidates who share our goals of promoting the value of innovation to our economy while giving people the power to share and make the world more open and connected."[301]

Unfriending psychological impact

Although Facebook has the upside of friending people, there is also the downside of having someone unfriend or reject another person, according to psychologist Susan Krauss Whitbourne.[302] Whitbourne refers to unfriended persons on Facebook as victims of estrangement.[302] Unfriending someone is seldom a mutual decision, and the person often does not know of being unfriended.[302]


Google glasses

Posted: January 14, 2014 in Google glasses


Google Glass is a wearable computer with an optical head-mounted display (OHMD) that is being developed by Google in the Project Glass research and development project,[8] with a mission of producing a mass-market ubiquitous computer.[1] Google Glass displays information in a smartphone-like, hands-free format[9] and can communicate with the Internet via natural language voice commands.[10][11]

While the frames do not currently have lenses fitted to them, Google is considering partnerships with sunglass retailers such as Ray-Ban or Warby Parker, and may also open retail stores to allow customers to try on the device.[1] The Explorer Edition cannot be used by people who wear prescription glasses, but Google has confirmed that Glass will eventually work with frames and lenses that match the wearer’s prescription; the glasses will be modular and therefore possibly attachable to normal prescription glasses.[12]

Glass is being developed by Google X,[13] which has worked on other futuristic technologies such as driverless cars. The project was announced on Google+ by Project Glass lead Babak Parviz, an electrical engineer who has also worked on putting displays into contact lenses; Steve Lee, a product manager and “geolocation specialist”; and Sebastian Thrun, who developed Udacity as well as worked on the autonomous car project.[14] Google has patented the design of Project Glass.[15][16] Thad Starner, an augmented reality expert, is a technical lead/manager on the project.[17]

Hardware


Camera

Google Glass has the ability to take photos and record 720p HD video. While video is recording, the screen stays on.

Touchpad

A man controls Google Glass using the touchpad built into the side of the device

A touchpad is located on the side of Google Glass, allowing users to control the device by swiping through a timeline-like interface displayed on the screen.[51] Sliding backward shows current events, such as weather, and sliding forward shows past events, such as phone calls, photos, circle updates, etc.

Technical specifications

For the developer Explorer units:

  • Android 4.0.4 and higher[4]
  • 640×360 display[6]
  • 5-megapixel camera, capable of 720p video recording[7]
  • Wi-Fi 802.11b/g[7]
  • Bluetooth[7]
  • 16 GB storage (12 GB available)[7]
  • Texas Instruments OMAP 4430 SoC, 1.2 GHz dual-core (ARMv7)[6]
  • 682 MB RAM ("proc")
  • 3-axis gyroscope[52]
  • 3-axis accelerometer[52]
  • 3-axis magnetometer (compass)[52]
  • Ambient light sensing and proximity sensor[52]
  • Bone conduction transducer[7]


Software

Applications

Google Glass applications are free applications built by third-party developers. Glass also uses many existing Google applications, such as Google Now, Google Maps, Google+, and Gmail.

Third-party applications announced at South by Southwest (SXSW) include Evernote, Skitch, The New York Times, and Path.[53]

On April 15, 2013, Google released the Mirror API, allowing developers to start making apps for Glass.[54][55] In the terms of service, it is stated that developers may not put ads in their apps or charge fees;[56] a Google representative told The Verge that this might change in the future.[57]
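Glassware built on the Mirror API runs off-device: an app pushes "timeline items" to Google's REST endpoint rather than executing on Glass itself. A rough sketch of inserting a text card follows (the access token is a placeholder; a real app would first complete an OAuth 2.0 flow for the glass.timeline scope):

    import json
    import urllib.request

    ACCESS_TOKEN = "ya29.EXAMPLE"  # placeholder; obtained via a real OAuth 2.0 flow

    # Insert a simple text card into the wearer's timeline via the Mirror API.
    request = urllib.request.Request(
        "https://www.googleapis.com/mirror/v1/timeline",
        data=json.dumps({"text": "Hello from a Glassware sketch"}).encode(),
        headers={
            "Authorization": "Bearer " + ACCESS_TOKEN,
            "Content-Type": "application/json",
        },
    )
    print(urllib.request.urlopen(request).read().decode())  # created item as JSON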

Many developers and companies have built applications for Glass, including news apps, facial recognition, exercise, photo manipulation, translation, and sharing to social networks, such as Facebook and Twitter.[58][59][60]

On May 16, 2013, Google announced the release of seven new apps, including reminders from Evernote, fashion news from Elle, and news alerts from CNN.[61] Following Google’s XE7 Glass Explorer Edition update in early July 2013, evidence of a “Glass Boutique”, a store that will allow synchronization to Glass of Glassware and APKs, was noted.[49]

Version XE8 debuted on Google Glass on August 12, 2013. It brings an integrated video player with playback controls, the ability to post an update to Path, and lets users save notes to Evernote. Other minor improvements include volume controls, improved voice recognition, and several new Google Now cards.

On November 19, 2013, Google unveiled its Glass Development Kit, showcasing the translation app Word Lens, the cooking app AllTheCooks, and the exercise app Strava, among others, as successful examples.[62][63]

MyGlass

Google offers a companion Android and iOS app called MyGlass, which allows you to configure and manage your device.[64]

Voice activation

Other than the touchpad, Google Glass can be controlled using “voice actions”. To activate Glass, wearers tilt their heads 30° upward (which can be altered for preference) or tap the touchpad, and say “O.K., Glass.” Once Glass is activated, wearers can say an action, such as “Take a picture”, “Record a video”, “Hangout with [person/Google+ circle]”, “Google ‘What year was Wikipedia founded?'”, “Give me directions to the Eiffel Tower”, and “Send a message to John”[65] (many of these commands can be seen in a product video released in February 2013).[38] For search results that are read back to the user, the voice response is relayed using bone conduction through a transducer that sits beside the ear, thereby rendering the sound almost inaudible to other people.[66]

Reception

Critical reception

In November 2012, Glass received recognition by Time Magazine as one of the “Best Inventions of the Year 2012”, alongside inventions such as the Curiosity Rover.[67]

After a visit to the University of Cambridge by Google’s chairman Eric Schmidt in February 2013, Wolfson College professor[68] John Naughton praised the Glass and compared it with the achievements of hardware and networking pioneer Douglas Engelbart. Naughton wrote that Engelbart believed that machines “should do what machines do best, thereby freeing up humans to do what they do best”.[69]

Lisa A. Goldstein, a freelance journalist who was born profoundly deaf, tested the product on behalf of people with disabilities and published a review on August 6, 2013. In her review, Goldstein states that Google Glass does not accommodate hearing aids and is not suitable for people who cannot understand speech. Goldstein also explained the limited options for customer support, as telephone contact was her only means of communication.[70]

In December 2013, David Datuna became the first artist to incorporate Google Glass into a contemporary work of art.[71][72] The artwork debuted at a private event at The New World Symphony in Miami Beach, Florida, US and was moved to the Miami Design District for the public debut.[73] Over 1500 people used Google Glass to experience Datuna’s American flag from his “Viewpoint of Billions” series.[74]

Privacy concerns

Steve Mann, inventor of EyeTap, wearing several developments of his device which has been compared with Google Glass[75]

The eyewear's functionality and minimalist appearance have been compared to Steve Mann's EyeTap,[75] also known as "Glass" or "Digital Eye Glass", although Google Glass is a "Generation-1 Glass" compared to EyeTap, which is a "Generation-4 Glass".[76] According to Mann, both devices affect privacy and secrecy by introducing two-sided surveillance and sousveillance.[77]

Concerns have been raised by various sources regarding the intrusion of privacy, and the etiquette and ethics of using the device in public and recording people without their permission,[78][79][80] even though many artists practicing street photography or life reportage, including Henri Cartier-Bresson, have taken pictures of people in public without their consent or knowledge, and today web services such as Google Street View do the same on a massive scale. There is controversy over whether Google Glass would violate privacy rights due to security problems and other issues.[81][82][83]

Privacy advocates are concerned that people wearing such eyewear may be able to identify strangers in public using facial recognition, or surreptitiously record and broadcast private conversations.[1] Some companies in the U.S. have posted anti-Google Glass signs in their establishments.[84][85] In July 2013, prior to the official release of the product, Stephen Balaban, co-founder of software company Lambda Labs, circumvented Google’s facial recognition app block by building his own, non-Google-approved operating system. Balaban then installed face-scanning Glassware that creates a summary of commonalities shared by the scanned person and the Glass wearer, such as mutual friends and interests.[86] Additionally, Michael DiGiovanni created Winky, a program that allows a Google Glass user to take a photo with a wink of an eye, while Marc Rogers, a principal security researcher at Lookout, discovered that Glass can be hijacked if a user could be tricked into taking a picture of a malicious QR code.[87]

Other concerns have been raised regarding legality of the Glass in a number of countries, particularly in Russia, Ukraine, and other post-USSR countries. In February 2013, a Google+ user noticed legal issues with Glass and posted in the Glass Explorers community about the issues, stating that the device may be illegal to use according to the current legislation in Russia and Ukraine, which prohibits use of spy gadgets that can record video, audio or take photographs in an inconspicuous manner.[88]

Concerns were also raised in regard to the privacy and security of Glass users in the event that the device is stolen or lost, an issue raised by a US congressional committee. As part of its response to the committee, Google stated in early July that it is working on a locking system and raised awareness of the ability of users to remotely reset Glass from the web interface in the event of loss.[49]

Several facilities have banned the use of Google Glass before its release to the general public, citing concerns over potential privacy-violating capabilities. Other facilities, such as Las Vegas casinos, banned Google Glass, citing their desire to comply with Nevada state law and common gaming regulations which ban the use of recording devices near gambling areas.[89]

Safety considerations

Concerns have also been raised about operating motor vehicles while wearing the device. On 31 July 2013 it was reported that driving while wearing Google Glass is likely to be banned in the UK, being deemed careless driving and therefore a fixed penalty offense, following a decision by the Department for Transport.[90]

In the US, West Virginia state representative Gary G. Howell introduced an amendment in March 2013 to the state’s law against texting while driving that would include bans against “using a wearable computer with head mounted display.” In an interview, Howell stated, “The primary thing is a safety concern, it [the glass headset] could project text or video into your field of vision. I think there’s a lot of potential for distraction.”[91]

In October 2013, a driver in California was ticketed for "driving with monitor visible to driver (Google Glass)" after being pulled over for speeding by a San Diego Police Department officer. The driver was reportedly the first to be ticketed for driving while wearing Google Glass.[92]

Terms of service

Under the Google Glass terms of service for the Glass Explorer pre-public release program, it specifically states, “you may not resell, loan, transfer, or give your device to any other person. If you resell, loan, transfer, or give your device to any other person without Google’s authorization, Google reserves the right to deactivate the device, and neither you nor the unauthorized person using the device will be entitled to any refund, product support, or product warranty.” Wired commented on this policy of a company claiming ownership of its product after it had been sold, saying: “Welcome to the New World, one in which companies are retaining control of their products even after consumers purchase them.”[93] Others pointed out that Glass was not for public sale at all, but rather in private testing for selected developers, and that not allowing developers in a closed beta to sell to the public is not the same as banning consumers from reselling a publicly released device.[94]

Several proofs of concept for Google Glass have been proposed in healthcare:

In July 2013, Lucien Engelen commenced research on the usability and impact of Google Glass in the health care field. As of August 2013, Engelen, who is based at Singularity University and, in Europe, at Radboud University Medical Center,[95] is the first healthcare professional in Europe to participate in the Glass Explorer program.[96] His research on Google Glass (starting August 9, 2013) was conducted in operating rooms, ambulances, a trauma helicopter, general practice, and home care, as well as on use in public transportation for the visually or physically impaired. The research included taking pictures, streaming video to other locations, dictating the operative log, having students watch procedures, and tele-consultation through Hangouts. Engelen has documented his findings in blogs,[97] videos,[98] and pictures, on Twitter,[99] and on Google+,[100] and the research is still ongoing.

Key findings of his research included:

  1. The quality of pictures and video is usable for healthcare education, reference, and remote consultation. The camera needs to be tilted to a different angle[101] for most operative procedures.
  2. Tele-consultation is possible—depending on the available bandwidth—during operative procedures.[102]
  3. A stabilizer should be added to the video function to prevent choppy transmission when a surgeon looks at screens or colleagues.
  4. Battery life can easily be extended with the use of an external battery.
  5. Controlling the device and/or programs from another device is needed for some features because of the sterile environment.
  6. Text-to-speech ("Take a Note" to Evernote) exhibited a correction rate of 60 percent, without the addition of a medical thesaurus.
  7. A protocol or checklist displayed on the screen of Glass can be helpful during procedures.

Dr. Phil Haslam and Dr. Sebastian Mafeld demonstrated the first concepts for Google Glass in the field of interventional radiology. They demonstrated the manner in which the concept of Google Glass could assist a liver biopsy and fistulaplasty, and the pair stated that Google Glass has the potential to improve patient safety, operator comfort, and procedure efficiency in the field of interventional radiology.[103]

In June 2013, surgeon Dr. Rafael Grossmann was the first person to integrate Google Glass into the operating theater, when he wore the device during a PEG (percutaneous endoscopic gastrostomy) procedure.[104] In August 2013, Google Glass was also used at Wexner Medical Center at Ohio State University. Surgeon Dr. Christopher Kaeding used Google Glass to consult with a colleague in a distant part of Columbus, Ohio. A group of students at The Ohio State University College of Medicine also observed the operation on their laptop computers. Following the procedure, Kaeding stated, “To be honest, once we got into the surgery, I often forgot the device was there. It just seemed very intuitive and fit seamlessly.”[105]

In January 2014, the Indian-American orthopedic surgeon Selene G. Parekh conducted foot and ankle surgery using Google Glass in Jaipur, broadcast live on a Google website via the Internet. The surgery took place during a three-day annual Indo-US conference attended by a team of experts from the US and headed by Dr Ashish Sharma. Sharma said Google Glass allows looking at an X-ray or MRI without taking one's eye off the patient, and allows a doctor to communicate with a patient's family or friends during a procedure. "The image which the doctor sees through Google Glass will be broadcasted on the internet. It's an amazing technology. Earlier, during surgeries, to show something to another doctor, we had to keep moving and the cameraman had to move as well to take different angles. During this, there are chances of infection. So in this technology, the image seen by the doctor using Google Glass will be seen by everyone throughout the world," he said.

Telecommunication is communication at a distance by technological means, particularly through electrical signals or electromagnetic waves.[1][2][3][4][5][6] Due to the many different technologies involved, the word is often used in a plural form, as telecommunications.


Early telecommunication technologies included visual signals, such as beacons, smoke signals, semaphore telegraphs, signal flags, and optical heliographs.[7] Other examples of pre-modern telecommunications include audio messages such as coded drumbeats, lung-blown horns, and loud whistles. Electrical and electromagnetic telecommunication technologies include telegraph, telephone, and teleprinter networks, radio, microwave transmission, fiber optics, communications satellites and the Internet.

A revolution in wireless telecommunications began in the 1900s with pioneering developments in radio communications by Guglielmo Marconi. Marconi won the Nobel Prize in Physics in 1909 for his efforts. Other highly notable pioneering inventors and developers in the field of electrical and electronic telecommunications include Charles Wheatstone and Samuel Morse (telegraph), Alexander Graham Bell (telephone), Edwin Armstrong, and Lee de Forest (radio), as well as John Logie Baird and Philo Farnsworth (television).

The world's effective capacity to exchange information through two-way telecommunication networks grew from 281 petabytes of (optimally compressed) information in 1986, to 471 petabytes in 1993, to 2.2 (optimally compressed) exabytes in 2000, and to 65 (optimally compressed) exabytes in 2007.[8] This is the informational equivalent of two newspaper pages per person per day in 1986, and six entire newspapers per person per day by 2007.[9] Given this growth, telecommunications play an increasingly important role in the world economy; the global telecommunications industry was about a $4.7 trillion sector in 2012.[10][11] The service revenue of the global telecommunications industry was estimated to be $1.5 trillion in 2010, corresponding to 2.4% of the world's gross domestic product (GDP).
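That 1986–2007 growth corresponds to a compound annual rate that is easy to derive from the two endpoints (treating 65 exabytes as 65,000 petabytes):

    # Compound annual growth rate of two-way telecom capacity, 1986-2007.
    start_pb, end_pb, years = 281, 65_000, 2007 - 1986
    cagr = (end_pb / start_pb) ** (1 / years) - 1
    print(f"{cagr:.1%}")  # roughly 29.6% per year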

Society and telecommunication

Telecommunication has a significant social, cultural and economic impact on modern society. In 2008, estimates placed the telecommunication industry's revenue at $4.7 trillion, or just under 3 percent of the gross world product (official exchange rate).[10] Several following sections discuss the impact of telecommunication on society.


Economic impact

Microeconomics

On the microeconomic scale, companies have used telecommunications to help build global business empires. This is self-evident in the case of online retailer Amazon.com but, according to academic Edward Lenert, even the conventional retailer Wal-Mart has benefited from better telecommunication infrastructure compared to its competitors.[45] In cities throughout the world, home owners use their telephones to order and arrange a variety of home services ranging from pizza deliveries to electricians. Even relatively poor communities have been noted to use telecommunication to their advantage. In Bangladesh's Narshingdi district, isolated villagers use cellular phones to speak directly to wholesalers and arrange a better price for their goods. In Côte d'Ivoire, coffee growers share mobile phones to follow hourly variations in coffee prices and sell at the best price.[46]

Macroeconomics

On the macroeconomic scale, Lars-Hendrik Röller and Leonard Waverman suggested a causal link between good telecommunication infrastructure and economic growth.[47] Few dispute the existence of a correlation although some argue it is wrong to view the relationship as causal.[48]

Because of the economic benefits of good telecommunication infrastructure, there is increasing worry about the inequitable access to telecommunication services amongst various countries of the world—this is known as the digital divide. A 2003 survey by the International Telecommunication Union (ITU) revealed that roughly a third of countries have fewer than one mobile subscription for every 20 people and one-third of countries have fewer than one land-line telephone subscription for every 20 people. In terms of Internet access, roughly half of all countries have fewer than one out of 20 people with Internet access. From this information, as well as educational data, the ITU was able to compile an index that measures the overall ability of citizens to access and use information and communication technologies.[49] Using this measure, Sweden, Denmark and Iceland received the highest ranking while the African countries Nigeria, Burkina Faso and Mali received the lowest.[50]

Social impact


Telecommunication has played a significant role in social relationships. Nevertheless, devices like the telephone were originally advertised with an emphasis on the practical dimensions of the device (such as the ability to conduct business or order home services) as opposed to the social dimensions. It was not until the late 1920s and 1930s that the social dimensions of the device became a prominent theme in telephone advertisements. New promotions started appealing to consumers' emotions, stressing the importance of social conversations and staying connected to family and friends.[51]

Since then the role that telecommunications has played in social relations has become increasingly important. In recent years, the popularity of social networking sites has increased dramatically. These sites allow users to communicate with each other as well as post photographs, events and profiles for others to see. The profiles can list a person's age, interests, sexual preference and relationship status. In this way, these sites can play an important role in everything from organising social engagements to courtship.[52]

Prior to social networking sites, technologies like short message service (SMS) and the telephone also had a significant impact on social interactions. In 2000, market research group Ipsos MORI reported that 81% of 15 to 24 year-old SMS users in the United Kingdom had used the service to coordinate social arrangements and 42% to flirt.[53]
