Optical Disc
In computing and optical disc
recording technologies, an optical disc (OD) is a flat, usually circular disc
which encodes binary data (bits) in the form of pits (binary value of 0, or "off",
due to lack of reflection when read) and lands (binary value of 1, or "on", due to
a reflection when read) on a special material (often aluminium) on one of its
flat surfaces. The encoding material sits atop a thicker substrate (usually
polycarbonate) which makes up the bulk of the disc and forms a dust defocusing
layer. The encoding pattern follows a continuous, spiral path covering the
entire disc surface and extending from the innermost track to the outermost
track. The data is stored on the disc with a laser or stamping machine, and can
be accessed when the data path is illuminated with a laser diode in an optical
disc drive which spins the disc at speeds of about 200 to 4,000 RPM or more,
depending on the drive type, disc format, and the distance of the read head
from the center of the disc (inner tracks are read at a faster disc speed). The
pits or bumps distort the reflected laser light, hence most optical discs
(except the black discs of the original PlayStation video game console)
characteristically have an iridescent appearance created by the grooves of the reflective
layer. The reverse side of an optical disc usually has a printed label,
sometimes made of paper but often printed or stamped onto the disc itself. This
side of the disc contains the actual data and is typically coated with a
transparent material, usually lacquer. Unlike the 3½-inch floppy disk, most
optical discs do not have an integrated protective casing and are therefore
susceptible to data transfer problems due to scratches, fingerprints, and other
environmental problems.
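The quoted spindle-speed range can be sanity-checked for a constant-linear-velocity (CLV) drive, where the disc slows as the head moves outward. A minimal Python sketch, assuming a nominal 1× CD scanning velocity of 1.3 m/s and program-area radii of 25 mm and 58 mm (illustrative figures, not taken from the text above):

```python
import math

def clv_rpm(linear_velocity_m_s: float, radius_m: float) -> float:
    """Rotational speed (RPM) needed to keep a constant linear
    velocity under the read head at a given track radius."""
    circumference = 2 * math.pi * radius_m
    return linear_velocity_m_s / circumference * 60

# 1x CD audio scans the track at roughly 1.2-1.4 m/s (here 1.3 m/s).
V = 1.3
print(round(clv_rpm(V, 0.025)))  # innermost track (r = 25 mm): ~497 RPM
print(round(clv_rpm(V, 0.058)))  # outermost track (r = 58 mm): ~214 RPM
```

The inner track spins more than twice as fast as the outer one, which is why CLV drives audibly change speed while seeking.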
Optical discs are usually between 7.6 and 30
cm (3 to 12 in) in diameter, with 12 cm (4.75 in) being the most common size. A
typical disc is about 1.2 mm (0.05 in) thick, while the track pitch (distance
from the center of one track to the center of the next) is typically 1.6 μm.
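From the disc diameter and the track pitch one can estimate the total length of the data spiral: roughly the usable annulus area divided by the pitch. A sketch, assuming a program area running from 25 mm to 58 mm radius (assumed values for a 12 cm disc):

```python
import math

def spiral_length_m(r_inner_m: float, r_outer_m: float,
                    track_pitch_m: float) -> float:
    """Approximate spiral length: usable annulus area / track pitch."""
    area = math.pi * (r_outer_m**2 - r_inner_m**2)
    return area / track_pitch_m

# Assumed program area of 25-58 mm radius, 1.6 um track pitch.
print(round(spiral_length_m(0.025, 0.058, 1.6e-6)))  # ~5378 m, about 5.4 km
```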
An optical disc is designed to
support one of three recording types: read-only (e.g. CD and CD-ROM),
recordable (write-once, e.g. CD-R), or re-recordable (rewritable, e.g. CD-RW).
Write-once optical discs commonly have an organic dye recording layer between
the substrate and the reflective layer. Rewritable discs typically contain an
alloy recording layer composed of a phase change material, most often AgInSbTe,
an alloy of silver, indium, antimony, and tellurium.
Optical discs are most commonly used
for storing music (e.g. for use in a CD player), video (e.g. for use in a
Blu-ray player), or data and programs for personal computers (PC). The Optical
Storage Technology Association (OSTA) promotes standardized optical storage
formats. Although optical discs are more durable than earlier audio-visual and
data storage formats, they are susceptible to environmental and daily-use
damage. Libraries and archives enact optical media preservation procedures to
ensure continued usability in the computer's optical disc drive or
corresponding disc player.
For computer data backup and
physical data transfer, optical discs such as CDs and DVDs are gradually being
replaced with faster, smaller, and more reliable solid-state devices,
especially the USB flash drive. This trend is expected to continue as USB flash
drives continue to increase in capacity and drop in price. Similarly, personal
portable CD players have been supplanted by portable solid-state digital audio
players (MP3 players), and MP3 music purchased or shared over the Internet has
significantly reduced the number of audio CDs sold annually.
History
The optical disc was invented in
1958. In 1961 and 1969, David Paul Gregg registered patents for the analog
optical disc for video recording. This form of optical disc was a very early
form of the DVD (U.S. Patent 3,430,966). It is of special interest that U.S.
Patent 4,893,297, filed 1989, issued 1990, generated royalty income for Pioneer
Corporation's DVA until 2007, by then encompassing the CD, DVD, and Blu-ray systems.
In the early 1960s, the Music Corporation of America bought Gregg's patents and
his company, Gauss Electrophysics.
Later, in the Netherlands in
1969, Philips Research physicists began their first optical videodisc
experiments at Eindhoven. In 1975, Philips and MCA began to work together, and
in 1978, commercially much too late, they presented their long-awaited
Laserdisc in Atlanta. MCA delivered the discs and Philips the players. However,
the presentation was a technical and commercial failure and the Philips/MCA
cooperation ended.
In Japan and the U.S., Pioneer
succeeded with the videodisc until the advent of the DVD. In 1979, Philips and
Sony, in consortium, successfully developed the audio compact disc.
In the mid-1990s, a consortium of
manufacturers developed the second generation of the optical disc, the DVD.
Magnetic disks found only limited application in storing data in large
amounts, so there was a need for further data storage techniques. It was
found that optical means allowed much larger storage devices to be made,
which in turn gave rise to optical discs. The very first application of this
kind was the Compact Disc (CD), which was used in audio systems.
Sony and Philips developed the first generation of CDs in the early 1980s,
with complete specifications for these devices. With the help of this
technology, the possibility of representing an analog signal as a digital
signal was exploited to a great level. For this purpose, 16-bit samples of
the analog signal were taken at a rate of 44,100 samples per second, which
satisfies the Nyquist criterion. The first version of the CD was designed to
hold up to 74 minutes of music, corresponding to roughly 780 MB of audio
data.
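The raw data rate of CD audio follows directly from the sampling parameters just given; a quick sanity check in Python (74 minutes is the standard Red Book playing time):

```python
SAMPLE_RATE = 44_100      # samples per second, per channel
BYTES_PER_SAMPLE = 2      # 16-bit samples
CHANNELS = 2              # stereo

bytes_per_second = SAMPLE_RATE * BYTES_PER_SAMPLE * CHANNELS
minutes = 74
total_bytes = bytes_per_second * minutes * 60

print(bytes_per_second)          # 176400 B/s of uncompressed PCM
print(round(total_bytes / 1e6))  # ~783 MB for 74 minutes of audio
```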
The third generation optical disc was
developed in 2000-2006, and was introduced as Blu-ray Disc. First movies on
Blu-ray Discs were released in June 2006. Blu-ray eventually prevailed in a
high definition optical disc format war over a competing format, the HD DVD. A
standard Blu-ray disc can hold about 25 GB of data, a DVD about 4.7 GB, and a
CD about 700 MB.
First-generation
Initially, optical discs were
used to store music and computer software. The Laserdisc format stored analog
video signals for the distribution of home video, but commercially lost to the
VHS videocassette format, due mainly to its high cost and non-re-recordability;
other first-generation disc formats were designed only to store digital data
and were not initially capable of use as a digital video medium.
Most first-generation disc devices had an
infrared laser reading head. The minimum size of the laser spot is proportional
to the wavelength of the laser, so wavelength is a limiting factor upon the
amount of information that can be stored in a given physical area on the disc.
The infrared range is beyond the long-wavelength end of the visible light
spectrum, so it supports less density than shorter-wavelength visible light.
One example of high-density data storage capacity, achieved with an infrared
laser, is 700 MB of net user data for a 12 cm compact disc.
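The wavelength argument can be made concrete with the common diffraction rule of thumb d ≈ λ / (2·NA), where NA is the numerical aperture of the focusing optics. The sketch below uses nominal, published wavelength/aperture figures for the three disc generations (assumed typical values, not specified in the text):

```python
def spot_nm(wavelength_nm: float, numerical_aperture: float) -> float:
    """Rough diffraction-limited spot diameter: d ~ lambda / (2 NA)."""
    return wavelength_nm / (2 * numerical_aperture)

# Nominal laser wavelength (nm) and numerical aperture per generation.
for name, wl, na in [("CD", 780, 0.45), ("DVD", 650, 0.60),
                     ("Blu-ray", 405, 0.85)]:
    print(f"{name}: {spot_nm(wl, na):.0f} nm")  # smaller spot -> denser data
```

The shrinking spot (roughly 867 nm for CD down to about 238 nm for Blu-ray) is what permits the smaller pits and lands of later generations.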
Other factors that affect data
storage density include: the existence of multiple layers of data on the disc,
the method of rotation (constant linear velocity (CLV), constant angular
velocity (CAV), or zoned-CAV), the composition of lands and pits, and how much
margin is left unused at the center and the edge of the disc.
• Compact Disc (CD) and derivatives
• Video CD (VCD)
• Super Video CD
• Laserdisc
• GD-ROM
• Phase-change Dual
• Double Density Compact Disc (DDCD)
• Magneto-optical disc
• MiniDisc
Second-generation
Second-generation optical discs
were for storing great amounts of data, including broadcast-quality digital
video. Such discs usually are read with a visible-light laser (usually red);
the shorter wavelength and greater numerical aperture allow a narrower light
beam, permitting smaller pits and lands in the disc. In the DVD format, this
allows 4.7 GB storage on a standard 12 cm, single-sided, single-layer disc;
alternatively, smaller media, such as the DataPlay format, can have capacity
comparable to that of the larger, standard compact 12 cm disc.
• DVD and derivatives
• DVD-Audio
• DualDisc
• Digital Video Express (DIVX)
• Nintendo optical disc
• Super Audio CD
• Enhanced Versatile Disc
• DataPlay
• Universal Media Disc
• Ultra Density Optical
Third-generation
Third-generation optical discs are in
development, meant for distributing high-definition video and supporting greater
data storage capacities, accomplished with short-wavelength visible-light
lasers and greater numerical apertures. Blu-ray Disc and HD DVD use
blue-violet lasers and focusing optics of greater aperture, for use with discs
with smaller pits and lands, and thereby greater data storage capacity per layer.
In practice, the effective multimedia presentation capacity is improved with
enhanced video data compression codecs such as H.264/MPEG-4 AVC and VC-1.
• Blu-ray Disc (up to 128 GB, quad-layer)
• HD DVD (discontinued disc format, up to 51 GB, triple layer)
• CBHD (a derivative of the discontinued HD DVD format)
• Digital Multilayer Disk
• Fluorescent Multilayer Disc
• Forward Versatile Disc
Fourth-generation
The following formats go beyond the current
third-generation discs and have the potential to hold more than one terabyte
(1 TB) of data:
• Holographic Versatile Disc
• LS-R
• Protein-coated disc
Recordable and writable optical discs
There are numerous formats of
optical direct to disk recording devices on the market, all of which are based
on using a laser to change the reflectivity of the digital recording medium in
order to duplicate the effects of the pits and lands created when a commercial
optical disc is pressed. All formats enable reading of computer files as many
times as desired by the user, but writing is a different situation. Some
formats, such as CD-R, enable writes to be made only once to each sector on the
disc, while other formats, such as CD-RW, enable multiple writes to the same
sector, which is more like a magnetic hard disk drive (HDD). In August 2011, a
company named Millenniata announced a format called the M-DISC which, reverting
to the original technology of optical discs, creates physical pits in a
rock-like layer. The M-DISC is stable up to 500 °C (932 °F), is impervious to
humidity issues, and is engineered to maintain its integrity for 1,000 years
without degradation.
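The write-once semantics of a CD-R-style sector can be illustrated with a toy model (an illustrative class, not any real driver API): each sector accepts exactly one write, after which further writes are rejected, while reads remain unrestricted.

```python
class WriteOnceDisc:
    """Toy model of CD-R-style sectors: each sector may be written once."""

    def __init__(self, sectors: int):
        self.data = [None] * sectors  # None marks an unwritten sector

    def write(self, sector: int, value: bytes) -> None:
        if self.data[sector] is not None:
            raise IOError(f"sector {sector} already written (write-once medium)")
        self.data[sector] = value

    def read(self, sector: int):
        return self.data[sector]

disc = WriteOnceDisc(sectors=4)
disc.write(0, b"hello")
print(disc.read(0))          # b'hello' -- reads are unlimited
try:
    disc.write(0, b"again")  # a second write to the same sector fails
except IOError as e:
    print("rejected:", e)
```

A CD-RW-style medium would simply drop the `is not None` guard, allowing each sector to be rewritten like a hard disk sector.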
Specifications
Base (1x) and (current) maximum speeds by generation
Generation    Base (Mbit/s)    Max (Mbit/s)    Max speed
1st (CD)      1.17             65.6            56x
2nd (DVD)     10.57            253.6           24x
3rd (BD)      36               504             14x
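The maximum rates in the table are simply the 1× base rate multiplied by the quoted speed factor; any last-digit differences come from rounding of the base figures. A quick check:

```python
# Base 1x rates (Mbit/s) and quoted maximum speed multipliers per generation.
base_mbit = {"CD": 1.17, "DVD": 10.57, "BD": 36.0}
max_mult = {"CD": 56, "DVD": 24, "BD": 14}

for gen in base_mbit:
    rate = base_mbit[gen] * max_mult[gen]
    print(f"{gen}: {rate:.1f} Mbit/s at {max_mult[gen]}x")
```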
Capacity and nomenclature
Designation    Sides    Layers (total)    Diameter (cm)    Capacity (GB)
BD SS SL       1        1                 8                7.8
BD SS DL       1        2                 8                15.6
BD SS SL       1        1                 12               25
BD SS DL       1        2                 12               50
BD SS TL       1        3                 12               100
Magnetic storage
Magnetic storage and magnetic
recording are terms from engineering referring to the storage of data on a
magnetized medium. Magnetic storage uses different patterns of magnetization in
a magnetizable material to store data and is a form of non-volatile memory. The
information is accessed using one or more read/write heads. As of 2011,
magnetic storage media, primarily hard disks, are widely used to store
computer data as well as audio and video signals. In the field of computing,
the term magnetic storage is preferred, and in the field of audio and video
production, the term magnetic recording is more commonly used. The distinction
is less technical and more a matter of preference. Other examples of magnetic
storage media include floppy disks, magnetic recording tape, and magnetic
stripes on credit cards.
History
Magnetic storage in the form of
audio recording on a wire was publicized by Oberlin Smith in 1888. He filed a
patent in September, 1878 but did not pursue the idea as his business was
machine tools. The first publicly demonstrated (Paris Exposition of 1900)
magnetic recorder was invented by Valdemar Poulsen in 1898. Poulsen's device
recorded a signal on a wire wrapped around a drum. In 1928, Fritz Pfleumer
developed the first magnetic tape recorder. Early magnetic storage devices were
designed to record analog audio signals. Computer and now most audio and video
magnetic storage devices record digital data.
In old computers, magnetic
storage was also used for primary storage in a form of magnetic drum, or core
memory, core rope memory, thin film memory, twistor memory or bubble memory.
Unlike modern computers, magnetic tape was also often used for secondary
storage.
Design
Information is written to and
read from the storage medium as it moves past devices called read-and-write
heads that operate very close (often tens of nanometers) above the magnetic
surface. The read-and-write head is used to detect and modify the magnetization
of the material immediately under it.
The magnetic surface is conceptually divided
into many small sub-micrometer-sized magnetic regions, referred to as magnetic
domains, (although these are not magnetic domains in a rigorous physical
sense), each of which has a mostly uniform magnetization. Due to the
polycrystalline nature of the magnetic material each of these magnetic regions
is composed of a few hundred magnetic grains. Magnetic grains are typically 10
nm in size and each form a single true magnetic domain. Each magnetic region in
total forms a magnetic dipole which generates a magnetic field. In older hard
disk drive (HDD) designs the regions were oriented horizontally and parallel to
the disk surface, but beginning about 2005, the orientation was changed to
perpendicular to allow for closer magnetic domain spacing.
For reliable storage of data, the recording
material needs to resist self-demagnetization, which occurs when the magnetic
domains repel each other. Magnetic domains written too densely together into a
weakly magnetizable material will degrade over time due to rotation of the
magnetic moment of one or more domains to cancel out these forces. The domains
rotate sideways to a halfway position that weakens the readability of the
domain and relieves the magnetic stresses. Older hard disk drives used
iron(III) oxide as the magnetic material, but current disks use a cobalt-based
alloy.
A write head magnetizes a region
by generating a strong local magnetic field, and a read head detects the magnetization
of the regions. Early HDDs used an electromagnet both to magnetize the region
and to then read its magnetic field by using electromagnetic induction. Later
versions of inductive heads included metal-in-gap (MIG) heads and thin-film
heads. As data density increased, read heads using magnetoresistance (MR) came
into use; the electrical resistance of the head changed according to the
strength of the magnetism from the platter. Later development made use of
spintronics; in read heads, the magnetoresistive effect was much greater than
in earlier types, and was dubbed "giant" magnetoresistance (GMR). In
today's heads, the read and write elements are separate, but in close
proximity, on the head portion of an actuator arm. The read element is typically
magneto-resistive while the write element is typically thin-film inductive.
The heads are kept from
contacting the platter surface by the air that is extremely close to the
platter; that air moves at or near the platter speed. The record and playback
head are mounted on a block called a slider, and the surface next to the
platter is shaped to keep it just barely out of contact. This forms a type of
air bearing.
Magnetic recording classes
Analog recording
Analog recording is based on the
fact that remnant magnetization of a given material depends on the magnitude of
the applied field. The magnetic material is normally in the form of tape, with
the tape in its blank form being initially demagnetized. When recording, the
tape runs at a constant speed. The writing head magnetizes the tape with
current proportional to the signal. A magnetization distribution is achieved
along the magnetic tape. Finally, the distribution of the magnetization can be
read out, reproducing the original signal. The magnetic tape is typically made
by embedding magnetic particles in a plastic binder on polyester film tape. The
commonly used magnetic particles are iron oxide or chromium oxide particles and
metal particles with a size of 0.5 micrometers. Analog recording was very popular
in audio and video recording. In the past 20 years, however, tape recording has
been gradually replaced by digital recording.
Digital recording
Instead of creating a
magnetization distribution in analog recording, digital recording only needs
two stable magnetic states, which are the +Ms and -Ms on the hysteresis loop.
Examples of digital recording are floppy disks and HDDs.
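The two-state idea can be shown with a toy round-trip: bits are mapped onto idealized +Ms/-Ms magnetization states and recovered by thresholding. (A deliberately simplified illustration; real drives add run-length coding and error correction.)

```python
MS = 1  # saturation magnetization, in arbitrary units

def write_bits(bits):
    """Map each bit onto one of two stable magnetic states, +Ms or -Ms."""
    return [MS if b else -MS for b in bits]

def read_bits(states):
    """Recover bits by thresholding the detected magnetization."""
    return [1 if s > 0 else 0 for s in states]

pattern = [1, 0, 1, 1, 0]
assert read_bits(write_bits(pattern)) == pattern  # round-trip is exact
```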
Magneto-optical recording
Magneto-optical recording
writes/reads optically. When writing, the magnetic medium is heated locally by
a laser, which induces a rapid decrease of the coercive field. Then, a small
magnetic field can be used to switch the magnetization. The reading process is
based on the magneto-optical Kerr effect. The magnetic media are typically
amorphous R-FeCo thin films (R being a rare earth element). Magneto-optical
recording is not very popular. One famous example is the MiniDisc developed by
Sony.
Domain propagation memory
Domain propagation memory is also called
bubble memory. The basic idea is to control domain wall motion in a magnetic
medium that is free of microstructure. Bubble refers to a stable cylindrical
domain. Data is then recorded by the presence/absence of a bubble domain.
Domain propagation memory has high insensitivity to shock and vibration, so its
application is usually in space and aeronautics.
Technical details
Access method
Magnetic storage media can be
classified as either sequential access memory or random access memory,
although in some cases the distinction is not perfectly clear. The access time
can be defined as the average time needed to gain access to stored records. In
the case of magnetic wire, the read/write head only covers a very small part of
the recording surface at any given time. Accessing different parts of the wire
involves winding the wire forward or backward until the point of interest is
found. The time to access this point depends on how far away it is from the
starting point. The case of ferrite-core memory is the opposite: every core
location is immediately accessible at any given time.
Hard disks and modern linear
serpentine tape drives do not precisely fit into either category. Both have
many parallel tracks across the width of the media, and the read/write heads
take time to switch between tracks and to scan within tracks. Different spots
on the storage media take different amounts of time to access. For a hard disk
this time is typically less than 10 ms, but tapes might take as much as 100 s.
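The sequential-versus-random distinction can be modeled in a few lines: on a sequential medium the access time grows with the distance to wind, while a random-access memory has a flat, position-independent cost. A sketch with assumed wind-speed and access-time figures (illustrative, not measured):

```python
def tape_seek_s(current_pos_m: float, target_pos_m: float,
                wind_speed_m_s: float = 5.0) -> float:
    """Sequential medium: access time grows with the distance to wind."""
    return abs(target_pos_m - current_pos_m) / wind_speed_m_s

def core_access_s(_address: int) -> float:
    """Random-access medium: every location costs the same flat time."""
    return 100e-9  # an assumed, illustrative 100 ns

print(tape_seek_s(0, 200))  # 40.0 s to wind 200 m of tape
print(tape_seek_s(0, 1))    # 0.2 s for a nearby spot
print(core_access_s(0) == core_access_s(10**6))  # True: position-independent
```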
Current usage
As of 2011, common uses of
magnetic storage media are for computer data mass storage on hard disks and
the recording of analog audio and video works on analog tape. Since much of
audio and video production is moving to digital systems, the usage of hard
disks is expected to increase at the expense of analog tape. Digital tape and
tape libraries are popular for the high capacity data storage of archives and
backups. Floppy disks see some marginal usage, particularly in dealing with
older computer systems and software. Magnetic storage is also widely used in
some specific applications, such as bank cheques (MICR) and credit/debit cards
(mag stripes).
Future
A new type of magnetic storage,
called Magnetoresistive Random Access Memory or MRAM, is being produced that
stores data in magnetic bits based on the tunnel magnetoresistance (TMR)
effect. Its advantage is non-volatility, low power usage, and good shock
robustness. The 1st generation that was developed was produced by Everspin
Technologies, and utilized field-induced writing. The 2nd generation is being
developed through two approaches: Thermal Assisted Switching (TAS), which is
currently being developed by Crocus Technology, and Spin Torque Transfer (STT),
on which Crocus, Hynix, IBM, and several other companies are working. However,
with storage density and capacity orders of magnitude smaller than an HDD, MRAM
is useful in applications where moderate amounts of storage with a need for
very frequent updates are required, which flash memory cannot support due to
its limited write endurance.
Part 2
“basic
software”
OPERATING SYSTEM
An operating system (OS) is a
collection of software that manages computer hardware resources and provides
common services for computer programs. The operating system is a vital
component of the system software in a computer system. Application programs usually
require an operating system to function.
Time-sharing operating systems
schedule tasks for efficient use of the system and may also include accounting
for cost allocation of processor time, mass storage, printing, and other
resources.
For hardware functions such as
input and output and memory allocation, the operating system acts as an
intermediary between programs and the computer hardware, although the
application code is usually executed directly by the hardware and will
frequently make a system call to an OS function or be interrupted by it.
Operating systems can be found on almost any device that contains a
computer—from cellular phones and video game consoles to supercomputers and web
servers.
Examples of popular modern
operating systems include Android, BSD, iOS, Linux, Mac OS X, Microsoft
Windows, Windows Phone, and IBM z/OS. All these, except Windows and z/OS, share
roots in UNIX.
Types of operating systems
Real-time
A real-time operating system is a multitasking
operating system that aims at executing real-time applications. Real-time
operating systems often use specialized scheduling algorithms so that they can
achieve a deterministic nature of behavior. The main objective of real-time
operating systems is their quick and predictable response to events. They have
an event-driven or time-sharing design and often aspects of both. An
event-driven system switches between tasks based on their priorities or
external events while time-sharing operating systems switch tasks based on
clock interrupts.
Multi-user
A multi-user operating system
allows multiple users to access a computer system at the same time.
Time-sharing systems and Internet servers can be classified as multi-user
systems as they enable multiple-user access to a computer through the sharing
of time. Single-user operating systems have only one user but may allow
multiple programs to run at the same time.
Multi-tasking vs. single-tasking
A multi-tasking operating system allows more
than one program to be running at a time, from the point of view of human time
scales. A single-tasking system has only one running program. Multi-tasking can
be of two types: pre-emptive and co-operative. In pre-emptive multitasking, the
operating system slices the CPU time and dedicates one slot to each of the
programs. Operating systems such as Solaris and Linux support pre-emptive
multitasking, as does AmigaOS. Cooperative multitasking is achieved by relying
on each process to give time to the other processes in a defined manner. 16-bit
versions of Microsoft Windows used cooperative multi-tasking. 32-bit versions
of both Windows NT and Win9x used pre-emptive multi-tasking. Mac OS prior to
OS X used to support cooperative multitasking.
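Cooperative multitasking as described above can be sketched with Python generators: each task runs until it voluntarily yields, and a round-robin scheduler passes control along. (A toy model of the scheduling idea, not how any of the named systems are implemented.)

```python
from collections import deque

def task(name, steps):
    """A cooperative task: it must voluntarily yield to let others run."""
    for i in range(steps):
        print(f"{name} step {i}")
        yield  # hand control back to the scheduler

def run(tasks):
    """Round-robin scheduler: no pre-emption, a task runs until it yields."""
    queue = deque(tasks)
    while queue:
        t = queue.popleft()
        try:
            next(t)          # let the task run until its next yield
            queue.append(t)  # it cooperated, so requeue it
        except StopIteration:
            pass             # task finished, drop it

run([task("A", 2), task("B", 2)])
# Interleaved output: A step 0, B step 0, A step 1, B step 1
```

A task that never yields would starve all the others, which is exactly the failure mode that made pre-emptive multitasking win out.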
Distributed
A distributed operating system
manages a group of independent computers and makes them appear to be a single
computer. The development of networked computers that could be linked and
communicate with each other gave rise to distributed computing. Distributed
computations are carried out on more than one machine. When computers in a
group work in cooperation, they make a distributed system.
Embedded
Embedded operating systems are
designed to be used in embedded computer systems. They are designed to operate
on small machines like PDAs with less autonomy. They are able to operate with a
limited number of resources. They are very compact and extremely efficient by
design. Windows CE and Minix 3 are some examples of embedded operating systems.
History
Early computers were built to perform a series
of single tasks, like a calculator. Operating systems did not exist in their
modern and more complex forms until the early 1960s. Basic operating system
features were developed in the 1950s, such as resident monitor functions that
could automatically run different programs in succession to speed up
processing. Hardware features were added that enabled use of runtime libraries,
interrupts, and parallel processing. When personal computers became popular in
the 1980s, operating systems were made for them similar in concept to those used
on larger computers.
In the 1940s, the earliest electronic digital
systems had no operating systems. Electronic systems of this time were
programmed on rows of mechanical switches or by jumper wires on plug boards.
These were special-purpose systems that, for example, generated ballistics
tables for the military or controlled the printing of payroll checks from data
on punched paper cards. After programmable general purpose computers were
invented, machine languages (consisting of strings of the binary digits 0 and 1
on punched paper tape) were introduced that sped up the programming process
(Stern, 1981).
In the early 1950s, a computer
could execute only one program at a time. Each user had sole use of the
computer for a limited period of time and would arrive at a scheduled time with
program and data on punched paper cards and/or punched tape. The program would
be loaded into the machine, and the machine would be set to work until the
program completed or crashed. Programs could generally be debugged via a front
panel using toggle switches and panel lights. It is said that Alan Turing was a
master of this on the early Manchester Mark 1 machine, and he was already
deriving the primitive conception of an operating system from the principles of
the Universal Turing machine.
Later machines came with
libraries of programs, which would be linked to a user's program to assist in
operations such as input and output and generating computer code from
human-readable symbolic code. This was the genesis of the modern-day computer
system. However, machines still ran a single job at a time. At Cambridge
University in England the job queue was at one time a washing line from which
tapes were hung with different colored clothes-pegs to indicate job-priority.
Examples of operating systems
UNIX and unix-like operating
systems
Unix was originally written in assembly
language. Ken Thompson wrote B, mainly based on BCPL, based on his experience
in the MULTICS project. B was replaced by C, and Unix, rewritten in C, developed
into a large, complex family of inter-related operating systems which have been
influential in every modern operating system (see History).
The UNIX-like family is a diverse
group of operating systems, with several major sub-categories including System
V, BSD, and Linux. The name "UNIX" is a trademark of The Open Group
which licenses it for use with any operating system that has been shown to
conform to their definitions. "UNIX-like" is commonly used to refer to
the large set of operating systems which resemble the original UNIX.
Unix-like systems run on a wide variety of
computer architectures. They are used heavily for servers in business, as well
as workstations in academic and engineering environments. Free UNIX variants,
such as Linux and BSD, are popular in these areas.
Four operating systems are
certified by The Open Group (holder of the Unix trademark) as Unix. HP's
HP-UX and IBM's AIX are both descendants of the original System V Unix and are
designed to run only on their respective vendor's hardware. In contrast, Sun
Microsystems' Solaris Operating System can run on multiple types of hardware,
including x86 and Sparc servers, and PCs. Apple's Mac OS X, a replacement for
Apple's earlier (non-Unix) Mac OS, is a hybrid kernel-based BSD variant derived
from NeXTSTEP, Mach, and FreeBSD.
Unix interoperability was sought
by establishing the POSIX standard. The POSIX standard can be applied to any
operating system, although it was originally created for various Unix variants.
Microsoft Windows
Microsoft Windows is a family of
proprietary operating systems designed by Microsoft Corporation and primarily
targeted to Intel architecture based computers, with an estimated 88.9 percent
total usage share on Web connected computers. The newest version is Windows 8
for workstations and Windows Server 2012 for servers. Windows 7 recently
overtook Windows XP as the most used OS.
Microsoft Windows originated in
1985 as an operating environment running on top of MS-DOS, which was the
standard operating system shipped on most Intel architecture personal computers
at the time. In 1995, Windows 95 was released, which only used MS-DOS as a
bootstrap. For backwards compatibility, Win9x could run real-mode MS-DOS and
16-bit Windows 3.x drivers. Windows Me, released in 2000, was the last version in
the Win9x family. Later versions have all been based on the Windows NT kernel.
Current versions of Windows run on IA-32 and x86-64 microprocessors, although
Windows 8 will support the ARM architecture. In the past, Windows NT supported
non-Intel architectures.
Server editions of Windows are
widely used. In recent years, Microsoft has expended significant capital in an
effort to promote the use of Windows as a server operating system. However,
Windows usage on servers is not as widespread as on personal computers, as
Windows competes against Linux and BSD for server market share.
Other
There have been many operating
systems that were significant in their day but are no longer so, such as
AmigaOS; OS/2 from IBM and Microsoft; Mac OS, the non-Unix precursor to Apple's
Mac OS X; BeOS; XTS-300; RISC OS; MorphOS and FreeMint. Some are still used in
niche markets and continue to be developed as minority platforms for enthusiast
communities and specialist applications. OpenVMS, formerly from DEC, is still
under active development by Hewlett-Packard. Yet other operating systems are
used almost exclusively in academia, for operating systems education or to do
research on operating system concepts. A typical example of a system that
fulfills both roles is MINIX, while, for example, Singularity is used purely for
research.
Other operating systems have failed to win
significant market share, but have introduced innovations that have influenced
mainstream operating systems, not least Bell Labs' Plan 9.
Components
The components of an operating system all
exist in order to make the different parts of a computer work together. All
user software needs to go through the operating system in order to use any of
the hardware, whether it be as simple as a mouse or keyboard or as complex as
an Internet component.
Kernel
With the aid of the firmware and
device drivers, the kernel provides the most basic level of control over all of
the computer's hardware devices. It manages memory access for programs in the
RAM, it determines which programs get access to which hardware resources, it
sets up or resets the CPU's operating states for optimal operation at all
times, and it organizes the data for long-term non-volatile storage with file
systems on such media as disks, tapes, flash memory, etc.
Networking
Currently most operating systems support a
variety of networking protocols, hardware, and applications for using them.
This means that computers running dissimilar operating systems can participate
in a common network for sharing resources such as computing, files, printers,
and scanners using either wired or wireless connections. Networks can
essentially allow a computer's operating system to access the resources of a
remote computer to support the same functions as it could if those resources
were connected directly to the local computer. This includes everything from
simple communication, to using networked file systems or even sharing another
computer's graphics or sound hardware. Some network services allow the
resources of a computer to be accessed transparently, such as SSH which allows
networked users direct access to a computer's command line interface.
Client/server networking allows a program on a
computer, called a client, to connect via a network to another computer, called
a server. Servers offer (or host) various services to other network computers
and users. These services are usually provided through ports or numbered access
points beyond the server's network address. Each port number is usually
associated with a maximum of one running program, which is responsible for
handling requests to that port. A daemon, being a user program, can in turn
access the local hardware resources of that computer by passing requests to the
operating system kernel.
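As a sketch of the client/server pattern described above, the following Python fragment runs a one-request "daemon" on a port the operating system assigns, plus a client that reaches the service through the server's address and port number. The echo service itself is invented purely for illustration.

```python
import socket
import threading

# A minimal "daemon": it owns one listening port and answers a single request.
def handle_one(srv):
    conn, _ = srv.accept()
    request = conn.recv(1024)
    conn.sendall(b"echo: " + request)   # the service's reply
    conn.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
port = srv.getsockname()[1]       # this number now identifies the service
srv.listen(1)
t = threading.Thread(target=handle_one, args=(srv,))
t.start()

# The client connects via the server's network address *and* port number.
with socket.create_connection(("127.0.0.1", port)) as c:
    c.sendall(b"hello")
    reply = c.recv(1024)
t.join()
srv.close()
print(reply.decode())   # echo: hello
```

Note that each port is associated with at most one listening program, exactly as the text describes: a second bind to the same port would fail.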
Many operating systems support
one or more vendor-specific or open networking protocols as well, for example,
SNA on IBM systems, DECnet on systems from Digital Equipment Corporation, and
Microsoft-specific protocols (SMB) on Windows. Specific protocols for specific
tasks may also be supported, such as NFS for file access. Protocols like ESound
(esd) can be easily extended over the network to provide sound from local
applications on a remote system's sound hardware.
Security
A computer being secure depends
on a number of technologies working properly. A modern operating system
provides access to a number of resources, which are available to software
running on the system, and to external devices like networks via the kernel.
The operating system must be
capable of distinguishing between requests which should be allowed to be
processed, and others which should not be processed. While some systems may
simply distinguish between "privileged" and
"non-privileged", systems commonly have a form of requester identity,
such as a user name. To establish identity there may be a process of
authentication. Often a username must be quoted, and each username may have a
password. Other methods of authentication, such as magnetic cards or biometric
data, might be used instead. In some cases, especially connections from the
network, resources may be accessed with no authentication at all (such as
reading files over a network share). Also covered by the concept of requester
identity is authorization; the particular services and resources accessible by
the requester once logged into a system are tied to either the requester's user
account or to the variously configured groups of users to which the requester
belongs.
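A minimal sketch of the password authentication described above, using Python's standard library. The function names and the salted-hash scheme are illustrative choices, not a description of any particular operating system's login mechanism.

```python
import hashlib
import hmac
import os

# The system stores a salted hash of the password, never the password itself.
def make_record(password):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def authenticate(password, record):
    salt, stored = record
    attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(attempt, stored)  # constant-time comparison

record = make_record("s3cret")
print(authenticate("s3cret", record))   # True
print(authenticate("wrong", record))    # False
```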
In addition to the allow/disallow
model of security, a system with a high level of security will also offer
auditing options. These would allow tracking of requests for access to
resources (such as, "who has been reading this file?"). Internal
security, or security from an already running program, is only possible if all
potentially harmful requests must be carried out through interrupts
to the operating system kernel. If programs can directly access hardware and
resources, they cannot be secured.
External security involves a request from
outside the computer, such as a login at a connected console or some kind of
network connection. External requests are often passed through device drivers
to the operating system's kernel, where they can be passed onto applications,
or carried out directly. Security of operating systems has long been a concern
because of highly sensitive data held on computers, both of a commercial and
military nature. The United States Department of Defense (DoD) created the Trusted Computer System Evaluation Criteria (TCSEC), a standard that sets
basic requirements for assessing the effectiveness of
security. This became of vital importance to operating system makers, because
the TCSEC was used to evaluate, classify and select trusted operating systems being considered for the processing, storage and retrieval
of sensitive or classified information.
Network services include offerings such as file sharing, print
services, email, web sites, and file transfer protocols
(FTP), most of which can have compromised security. At the front line of
security are hardware devices known as firewalls or intrusion
detection/prevention systems. At the operating system level, there are a number
of software firewalls available, as well as intrusion detection/prevention
systems. Most modern operating systems include a software firewall, which is
enabled by default. A software firewall can be configured to allow or deny
network traffic to or from a service or application running on the operating system. Therefore, one can install and be
running an insecure service, such as Telnet or FTP, and not have to be
threatened by a security breach because the firewall would deny all traffic
trying to connect to the service on that port.
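The allow/deny behavior of such a software firewall can be sketched as a toy packet filter. The rule table and port numbers below are hypothetical, chosen only to mirror the Telnet/FTP example in the text.

```python
# A toy packet filter illustrating firewall-style allow/deny rules.
RULES = [
    {"port": 23,  "action": "deny"},    # Telnet: block all traffic
    {"port": 21,  "action": "deny"},    # FTP control: block all traffic
    {"port": 443, "action": "allow"},   # HTTPS: permit
]

def filter_packet(dest_port, default="deny"):
    for rule in RULES:
        if rule["port"] == dest_port:
            return rule["action"]
    return default   # default-deny: unknown services are unreachable

print(filter_packet(23))    # deny
print(filter_packet(443))   # allow
```

With a default-deny policy, an insecure service left running on a blocked port simply never receives traffic, which is the point the paragraph makes.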
An alternative strategy, and the only sandbox strategy available
in systems that do not meet the Popek and Goldberg virtualization requirements,
is for the operating system not to run user programs as native code, but instead
to emulate a processor or provide a host for a p-code-based system such as Java.
Internal security is especially
relevant for multi-user systems; it allows each user of the system to have
private files that the other users cannot tamper with or read. Internal
security is also vital if auditing is to be of any use, since a program can
potentially bypass the operating system, including its auditing.
FACE OF
INTERNET
The Internet (or internet) is a global system of interconnected
computer networks that use the standard Internet protocol suite (often called TCP/IP, although not all
applications use TCP) to serve billions of users
worldwide. It is a network of networks that consists of millions of private, public, academic, business, and government networks, of
local to global scope, that are linked by a broad array of electronic,
wireless and optical networking technologies. The Internet carries an extensive
range of information resources and services, such as the inter-linked hypertext
documents of the World Wide Web (WWW) and the infrastructure to support email.
Most
traditional communications media including telephone, music, film, and
television are being reshaped or redefined by the Internet, giving birth to new
services such as Voice over Internet
Protocol (VoIP) and Internet Protocol
Television (IPTV). Newspaper, book, and other print publishing are adapting to Web site technology, or are being reshaped
into blogging and web feeds. The Internet has enabled and accelerated
new forms of human interaction through instant messaging, Internet forums, and
social networking. Online shopping has
boomed both for major retail outlets and small artisans and traders. Business-to-business and financial
services on the Internet affect supply chains across entire industries.
The origins of the Internet reach back
to research of the 1960s, commissioned by the United States government to build
robust, fault-tolerant, and distributed computer networks. The funding of a new
U.S. backbone by the National Science Foundation in the 1980s, as well as
private funding for other commercial backbones, led to worldwide participation
in the development of new networking technologies, and the merger of many
networks. The commercialization of what was by the 1990s an international
network resulted in its popularization and incorporation into virtually every
aspect of modern human life. As of June 2012, more than 2.4 billion
people—nearly a third of the world's human population—have used the services of
the Internet.
The Internet has no centralized
governance in either technological implementation or policies for access and usage; each
constituent network sets its own standards. Only the overreaching definitions
of the two principal name spaces in the Internet, the Internet Protocol address
space and the Domain Name System, are directed by a maintainer organization,
the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the
core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task
Force (IETF), a non-profit organization of loosely affiliated international
participants that anyone may associate with by contributing technical
expertise.
History
Research into packet switching started
in the early 1960s and packet switched networks such as Mark I at NPL in the
UK, ARPANET, CYCLADES, Merit Network, Tymnet, and Telenet, were developed in
the late 1960s and early 1970s using a variety of protocols. The ARPANET in
particular led to the development of protocols for
internetworking, where multiple separate networks could be joined together into
a network of networks thanks to the work of British scientist Donald Davies,
whose ground-breaking work on packet switching was essential to the system.
The first two nodes of what would become the
ARPANET were interconnected between Leonard Kleinrock's Network Measurement
Center at the UCLA's School of Engineering and Applied Science and Douglas Engelbart's NLS
system at SRI International (SRI) in Menlo Park, California, on 29 October 1969. The third site on the
ARPANET was the Culler-Fried Interactive Mathematics center at the University of California at Santa Barbara, and the fourth was
the University of Utah Graphics Department.
In an early sign of future growth, there were already fifteen sites connected
to the young ARPANET by the end of 1971. These early years were documented in
the 1972 film Computer Networks: The Heralds of Resource Sharing.
Early international collaborations on
ARPANET were sparse. For various
political reasons, European developers were concerned with developing the
X.25 networks. Notable exceptions were the Norwegian
Seismic Array (NORSAR) in June 1973, followed in
1973 by Sweden with satellite links to the Tanum Earth Station and Peter T. Kirstein's research group in
the UK, initially at the Institute of Computer Science,
University of London and later at University College London.
In December 1974, RFC 675 (Specification of Internet Transmission Control Program), by Vinton Cerf, Yogen Dalal, and Carl Sunshine, used the term internet as a shorthand for internetworking;
later RFCs
repeat this use, so the word started out as an adjective rather than the noun it
is today. Access to the ARPANET was expanded in 1981
when the National Science Foundation (NSF) developed the Computer Science Network (CSNET). In 1982, the Internet Protocol
Suite (TCP/IP) was standardized and the concept of a world-wide network
of fully interconnected TCP/IP networks called the Internet was introduced.
TCP/IP network access expanded again in 1986
when the National Science Foundation Network (NSFNET)
provided access to supercomputer sites in the United States from research and
education organizations, first at 56 kbit/s and later at 1.5 Mbit/s and 45 Mbit/s. Commercial Internet
service providers
(ISPs) began to emerge in the late 1980s and early 1990s. The ARPANET was
decommissioned in 1990.
The Internet was commercialized in 1995 when NSFNET was decommissioned,
removing the last restrictions on the use of the
Internet to carry commercial traffic. The Internet started a rapid expansion to
Europe and Australia in the mid to late 1980s and to Asia in the late 1980s and
early 1990s.
Since the mid-1990s the Internet has had
a tremendous impact on culture and commerce, including the rise of near instant
communication by email, instant messaging, Voice over Internet Protocol (VoIP) "phone calls", two-way interactive video
calls, and the World Wide Web with its discussion forums, blogs, social
networking, and online shopping sites. Increasing amounts of data are
transmitted at higher and higher speeds over fiber optic networks operating at
1-Gbit/s, 10-Gbit/s, or more. The Internet continues to grow, driven by ever
greater amounts of online information and knowledge, commerce, entertainment
and social networking.
During the late 1990s, it was
estimated that traffic on the public Internet grew by 100 percent per year, while the mean annual growth in the
number of Internet users was thought to be between 20% and 50%. This growth is often attributed to the lack of central
administration, which allows organic growth of the network, as well as the
non-proprietary open nature of the Internet protocols, which encourages vendor interoperability and prevents any one
company from exerting too much control over the network. As of 31 March 2011, the estimated total number of Internet
users was 2.095 billion (30.2% of world population). It is estimated that
in 1993 the Internet carried only 1% of the information flowing through two-way telecommunication, by 2000 this
figure had grown to 51%, and by 2007 more than 97% of all
telecommunicated information was carried over the Internet.
Technology
Protocols
The communications infrastructure of the
Internet consists of its hardware components and a system of software layers
that control various aspects of the architecture. While the hardware can often
be used to support other software systems, it is the design and the rigorous
standardization process of the software architecture that characterizes the
Internet and provides the foundation for its scalability and success. The
responsibility for the architectural design of the Internet software systems
has been delegated to the Internet
Engineering Task Force (IETF). The IETF conducts standard-setting work groups,
open to any individual, about the various aspects of Internet architecture.
Resulting discussions and final standards
are published in a series of publications, each called a Request for Comments
(RFC), freely available on the IETF web site. The principal methods of
networking that enable the Internet are contained in specially
designated RFCs that constitute the Internet Standards. Other less rigorous documents
are simply informative, experimental, or historical, or document the best
current practices (BCP) when implementing Internet technologies.
The
Internet standards describe a framework known as the Internet protocol suite.
This is a model architecture that divides methods into a layered system of
protocols (RFC 1122, RFC 1123). The layers correspond
to the environment or scope in which their services operate. At the top is the
application layer, the space for the application-specific networking methods
used in software applications, e.g., a web
browser program. Below this top layer, the transport layer connects
applications on different
hosts via
the network (e.g., client-server model) with appropriate data exchange methods.
Underlying these layers are the core networking technologies, consisting of two
layers. The internet layer enables computers to identify and locate each other
via Internet Protocol (IP) addresses, and allows them to connect to one another
via intermediate (transit) networks. Last, at the bottom of the architecture,
is a software layer, the link layer, that provides connectivity between hosts
on the same local network link, such as a
local area network (LAN) or a dial-up connection. The model, also known as
TCP/IP, is designed to be independent of the underlying
hardware, which the model therefore does not concern itself with in any detail. Other models have been
developed, such as the Open Systems Interconnection (OSI) model, but they are not compatible in the details of description or
implementation; many similarities exist, and the TCP/IP protocols are usually
included in the discussion of OSI networking.
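The layering just described can be illustrated with a toy encapsulation sketch: each layer wraps the payload it receives from the layer above. The header strings here are simplified stand-ins, not real wire formats.

```python
# Conceptual sketch of layered encapsulation in the Internet protocol suite.
def application_layer(message):
    return "HTTP " + message                      # application-specific method

def transport_layer(segment, src_port, dst_port):
    return f"TCP[{src_port}->{dst_port}] " + segment   # host-to-host delivery

def internet_layer(packet, src_ip, dst_ip):
    return f"IP[{src_ip}->{dst_ip}] " + packet    # addressing and routing

def link_layer(frame):
    return "ETH " + frame                         # local network link

wire = link_layer(internet_layer(
    transport_layer(application_layer("GET /index.html"), 49152, 80),
    "192.0.2.1", "198.51.100.7"))
print(wire)
```

Reading the output left to right reproduces the stack from bottom (link) to top (application), which is the order in which a receiving host peels the headers off again.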
The most prominent component of the
Internet model is the Internet Protocol (IP), which provides addressing
systems (IP addresses) for computers on the Internet. IP enables
internetworking and in essence establishes
the Internet itself. IP Version 4 (IPv4) is the initial version used on the
first generation of today's Internet and is still in dominant use. It
was designed to address up to about 4.3 billion (2^32) Internet hosts. However, the explosive growth of the
Internet has led to IPv4 address exhaustion, which entered its final
stage in 2011, when the global address allocation pool was exhausted. A new
protocol version, IPv6, was developed in the mid-1990s, which provides vastly
larger addressing capabilities and more efficient routing of Internet traffic.
IPv6 is currently in growing deployment around the world, since regional Internet registries (RIRs) began to urge all
resource managers to plan rapid adoption and conversion.
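Python's standard ipaddress module makes the difference in address-space size concrete:

```python
import ipaddress

# IPV4LENGTH and IPV6LENGTH are the address widths in bits (32 and 128).
ipv4_total = 2 ** ipaddress.IPV4LENGTH   # 2^32, about 4.3 billion addresses
ipv6_total = 2 ** ipaddress.IPV6LENGTH   # 2^128

print(ipv4_total)                # 4294967296
print(ipv6_total // ipv4_total)  # 2^96 IPv6 addresses per IPv4 address

# Addresses from the two families are distinct types, not interchangeable.
a4 = ipaddress.ip_address("192.0.2.1")
a6 = ipaddress.ip_address("2001:db8::1")
print(a4.version, a6.version)    # 4 6
```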
IPv6 is not
interoperable with IPv4. In essence, it establishes a parallel version of the
Internet not directly
accessible with IPv4 software. This means software upgrades or translator
facilities are necessary for networking devices that
need to communicate on both networks. Most modern computer operating systems
already support both versions of the Internet Protocol. Network
infrastructures, however, are still lagging in this development. Aside from the
complex array of physical connections that make
up its infrastructure, the Internet is facilitated by bi- or multi-lateral
commercial contracts (e.g., peering
agreements), and by technical specifications or protocols that describe how to
exchange data over the network. Indeed, the Internet is defined by its
interconnections and routing policies.
Routing
Internet Service Providers connect
customers (thought of at the "bottom" of the routing hierarchy) to customers of other ISPs. At the "top"
of the routing hierarchy are ten or so Tier 1 networks, large telecommunication
companies which exchange traffic directly "across" to all other Tier
1 networks via unpaid peering agreements.
Tier 2 networks buy Internet transit from other ISP to reach at least some
parties on the global Internet, though they may also engage in unpaid peering
(especially for local partners of a
similar size). ISPs can use a single "upstream" provider for
connectivity, or use multihoming to
provide protection from problems with individual links. Internet exchange points
create physical connections between multiple ISPs, often hosted in buildings
owned by independent third parties.
Computers
and routers use routing tables to direct IP packets among locally connected
machines. Tables can be
constructed manually or automatically via DHCP for an individual computer or a
routing protocol for routers themselves. In single-homed
situations, a default route usually points "up" toward an ISP providing transit. Higher-level ISPs use
the Border Gateway Protocol to sort out paths to any given range of IP
addresses across the complex connections of the global Internet.
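A routing table with the longest-prefix-match rule that routers (and BGP speakers) apply can be sketched in a few lines; the prefixes and next-hop names below are invented for illustration.

```python
import ipaddress

# A toy routing table: (destination prefix, next hop). Overlapping prefixes
# are allowed; the most specific match wins.
TABLE = [
    (ipaddress.ip_network("0.0.0.0/0"),   "default-upstream"),  # default route
    (ipaddress.ip_network("10.0.0.0/8"),  "internal"),
    (ipaddress.ip_network("10.1.0.0/16"), "branch-office"),
]

def next_hop(dest):
    addr = ipaddress.ip_address(dest)
    matches = [(net, hop) for net, hop in TABLE if addr in net]
    # Longest-prefix match: choose the route with the longest prefix length.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.1.2.3"))      # branch-office
print(next_hop("10.9.9.9"))      # internal
print(next_hop("203.0.113.5"))   # default-upstream
```

The 0.0.0.0/0 entry is the "default route" pointing "up" toward a transit provider, as described above: it matches everything but loses to any more specific prefix.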
Academic institutions, large companies,
governments, and other organizations can perform the same role as ISPs, engaging in peering and purchasing
transit on behalf of their internal networks of individual computers.
Research networks tend to interconnect into large subnetworks such
as GEANT, GLORIAD, Internet2, and the UK's national research and education network,
JANET. These in turn are built around smaller networks.
Not all computer networks are connected to the Internet.
For example, some classified United States websites are only accessible from
separate secure networks.
General
structure
The Internet structure and its
usage characteristics have been studied extensively. It has been determined that both the Internet IP routing
structure and hypertext links of the World Wide Web are examples of scale-free
networks.
Many computer scientists describe the
Internet as a "prime example of a large-scale, highly engineered, yet highly complex system". The Internet is
heterogeneous; for instance, data transfer rates and physical characteristics
of connections vary widely. The Internet exhibits "emergent
phenomena" that depend on its
large-scale organization. For example, data transfer rates exhibit temporal
self-similarity. The principles of the routing and addressing methods
for traffic in the Internet reach back to their origins in the 1960s when the
eventual scale and popularity of the network could not be anticipated. Thus,
the possibility of developing alternative structures is investigated. The
Internet structure was found to be highly robust to random failures and very
vulnerable to targeted attacks on high-degree nodes.
Services
World
Wide Web
Many people use the terms Internet and World Wide Web, or just the Web, interchangeably, but the two terms are not
synonymous. The World Wide Web is a global set of documents, images, and other
resources, logically interrelated by hyperlinks and referenced with Uniform
Resource Identifiers (URIs). URIs symbolically identify services, servers, and
other databases, and the documents and resources that they can provide.
Hypertext Transfer Protocol (HTTP) is the main access protocol of the World
Wide Web, but it is only one of the hundreds
of communication protocols used on the Internet. Web services also use HTTP to allow software systems
to communicate in order to share and exchange business logic and data.
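As the text notes, HTTP is the Web's main access protocol; on the wire a minimal GET request is plain text, as this sketch shows (the host and path are hypothetical):

```python
# Construct the raw text of an HTTP/1.1 GET request, line by line.
def http_get_request(host, path):
    return (f"GET {path} HTTP/1.1\r\n"   # request line: method, path, version
            f"Host: {host}\r\n"          # required header in HTTP/1.1
            "Connection: close\r\n"      # ask the server to close afterwards
            "\r\n")                      # blank line ends the header block

request = http_get_request("www.example.com", "/index.html")
print(request)
```

Sent over a TCP connection to port 80, a request of exactly this shape is what a browser or web-service client emits before the server replies with a status line, headers, and the document body.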
World Wide Web browser software, such as
Microsoft's Internet Explorer,
Mozilla Firefox, Opera, Apple's Safari, and Google Chrome, lets
users navigate from one web page to another via hyperlinks embedded in the
documents. These documents may also contain any combination of computer data, including graphics, sounds,
text, video, multimedia and interactive content that runs while the user is
interacting with the page. Client-side software can include animations, games,
office applications and scientific demonstrations. Through keyword-driven
Internet research using search engines like
Yahoo! and Google, users worldwide have easy, instant access to a vast and
diverse amount of online information. Compared to printed media, books,
encyclopedias and traditional libraries, the World Wide Web has enabled the
decentralization of information on a large scale.
The Web has also enabled individuals and
organizations to publish ideas and information to a potentially large audience online at greatly reduced expense
and time delay. Publishing a web page, a blog, or building a website
involves little initial cost and many cost-free services are available.
Publishing and maintaining large, professional web sites with attractive,
diverse and up-to-date information is still a difficult
and expensive proposition, however. Many individuals and some companies and
groups use web logs or blogs, which are largely used as easily updatable online diaries.
Some commercial organizations
encourage staff to communicate advice in their areas of specialization in the
hope that visitors will be impressed by the expert knowledge and free
information, and be attracted to the corporation as a result. One example of
this practice is Microsoft, whose product developers publish their personal
blogs in order to pique the public's interest in their work. Collections of
personal web pages
published by large service providers remain popular, and have become
increasingly sophisticated. Whereas operations such as Angelfire andGeoCities
have existed since the early days of the Web, newer offerings from, for
example, Facebook and Twitter currently have large followings. These operations
often brand themselves associal network services rather than simply as web page
hosts.
Advertising on popular web pages
can be lucrative, and e-commerce or the sale of products and services directly
via the Web continues to grow.
When the Web began in the 1990s,
a typical web page was stored in completed form on a web server, formatted in
HTML, ready to be sent to a user's browser in response to a request. Over time,
the process of creating and serving web pages has become more automated and
more dynamic. Websites are often created using content management or wiki
software with, initially, very little content. Contributors to these systems,
who may be paid staff, members of a club or other organization or members of
the public, fill underlying databases with content using editing pages designed
for that purpose, while casual visitors view and read this content in its final
HTML form. There may or may not be editorial, approval and security systems
built into the process of taking newly entered content and making it available
to the target visitors.
Communication
Email is an important communications service
available on the Internet. The concept of sending electronic text messages
between parties in a way analogous to mailing letters or memos predates the
creation of the Internet. Pictures, documents and other files are sent as email
attachments. Emails can be cc-ed to multiple email addresses.
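Building such a message with an attachment can be sketched with Python's standard email package; the addresses and the file content below are made up for the example.

```python
from email.message import EmailMessage

# Compose a message with a text body, multiple Cc recipients, and one
# attached file, mirroring the features the paragraph describes.
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Cc"] = "carol@example.com, dave@example.com"  # cc to several addresses
msg["Subject"] = "Report attached"
msg.set_content("See the attached file.")
msg.add_attachment(b"fake report bytes",
                   maintype="application", subtype="octet-stream",
                   filename="report.pdf")

print(msg.is_multipart())                               # True
print([p.get_filename() for p in msg.iter_attachments()])
```

Handing this object to smtplib's send_message would deliver it; here the sketch stops at constructing the MIME structure.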
Internet telephony is another
common communications service made possible by the creation of the Internet.
VoIP stands for Voice-over-Internet Protocol, referring to the protocol that
underlies all Internet communication. The idea began in the early 1990s with
walkie-talkie-like voice applications for personal computers. In recent years
many VoIP systems have become as easy to use and as convenient as a normal
telephone. The benefit is that, as the Internet carries the voice traffic, VoIP
can be free or cost much less than a traditional telephone call, especially
over long distances and especially for those with always-on Internet
connections such as cable or ADSL. VoIP is maturing into a competitive
alternative to traditional telephone service. Interoperability between
different providers has improved and the ability to call or receive a call from
a traditional telephone is available. Simple, inexpensive VoIP network adapters
are available that eliminate the need for a personal computer.
Voice quality can still vary from call to
call, but is often equal to and can even exceed that of traditional calls.
Remaining problems for VoIP include emergency telephone number dialing and
reliability. Currently, a few VoIP providers provide an emergency service, but
it is not universally available. Traditional phones are line-powered and
operate during a power failure; VoIP does not do so without a backup power
source for the phone equipment and the Internet access devices. VoIP has also
become increasingly popular for gaming applications, as a form of communication
between players. Popular VoIP clients for gaming include Ventrilo and
Teamspeak. The Wii, PlayStation 3, and Xbox 360 also offer VoIP chat features.
Data transfer
File sharing is an example of
transferring large amounts of data across the Internet. A computer file can be
emailed to customers, colleagues and friends as an attachment. It can be
uploaded to a website or FTP server for easy download by others. It can be put
into a "shared location" or onto a file server for instant use by
colleagues. The load of bulk downloads to many users can be eased by the use of
"mirror" servers or peer-to-peer networks. In any of these cases, access
to the file may be controlled by user authentication, the transit of the file
over the Internet may be obscured by encryption, and money may change hands for
access to the file. The price can be paid by the remote charging of funds from,
for example, a credit card whose details are also passed — usually fully
encrypted — across the Internet. The origin and authenticity of the file
received may be checked by digital signatures or by MD5 or other message
digests. These simple features of the Internet, over a worldwide basis, are
changing the production, sale, and distribution of anything that can be reduced
to a computer file for transmission. This includes all manner of print
publications, software products, news, music, film, video, photography,
graphics and the other arts. This in turn has caused seismic shifts in each of
the existing industries that previously controlled the production and
distribution of these products.
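The digest check mentioned above can be sketched with Python's hashlib; the "downloaded file" here is an in-memory stand-in rather than a real transfer.

```python
import hashlib

# Verify received data against a published digest. SHA-256 is shown
# alongside MD5, which the text mentions.
data = b"pretend this is a downloaded software package"

md5_digest = hashlib.md5(data).hexdigest()
sha_digest = hashlib.sha256(data).hexdigest()

def verify(received, expected_hex, algo="sha256"):
    # Recompute the digest of what arrived and compare with what was published.
    return hashlib.new(algo, received).hexdigest() == expected_hex

print(verify(data, sha_digest))          # True: file intact
print(verify(data + b"x", sha_digest))   # False: file altered in transit
```

A digest alone proves integrity, not origin; establishing who produced the file additionally requires a digital signature over the digest, as the text notes.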
Streaming media is the real-time delivery of
digital media for the immediate consumption or enjoyment by end users. Many
radio and television broadcasters provide Internet feeds of their live audio
and video productions. They may also allow time-shift viewing or listening such
as Preview, Classic Clips and Listen Again features. These providers have been
joined by a range of pure Internet "broadcasters" who never had
on-air licenses. This means that an Internet-connected device, such as a
computer or something more specific, can be used to access on-line media in
much the same way as was previously possible only with a television or radio
receiver. The range of available types of content is much wider, from
specialized technical webcasts to on-demand popular multimedia services.
Podcasting is a variation on this theme, where — usually audio — material is
downloaded and played back on a computer or shifted to a portable media player
to be listened to on the move. These techniques using simple equipment allow
anybody, with little censorship or licensing control, to broadcast audio-visual
material worldwide.
Digital media streaming increases
the demand for network bandwidth. For example, standard image quality needs 1
Mbit/s link speed for SD 480p, HD 720p quality requires 2.5 Mbit/s, and the
top-of-the-line HDX quality needs 4.5 Mbit/s for 1080p.
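A quick back-of-the-envelope check of these figures, converting each quoted rate into data moved during one hour of continuous streaming:

```python
# Link-speed requirements quoted above, in megabits per second.
RATES_MBIT = {"480p": 1.0, "720p": 2.5, "1080p": 4.5}

def gigabytes_per_hour(mbit_per_s):
    bits = mbit_per_s * 1_000_000 * 3600   # total bits in one hour
    return bits / 8 / 1e9                  # bits -> bytes -> gigabytes

for name, rate in RATES_MBIT.items():
    print(name, round(gigabytes_per_hour(rate), 3), "GB/hour")
```

So an hour of 1080p at 4.5 Mbit/s moves roughly 2 GB, which is why streaming video dominates demand for network bandwidth.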
Webcams are a low-cost extension
of this phenomenon. While some webcams can give full-frame-rate video, the
picture either is usually small or updates slowly. Internet users can watch
animals around an African waterhole, ships in the Panama Canal, traffic at a
local roundabout or monitor their own premises, live and in real time. Video
chat rooms and video conferencing are also popular with many uses being found
for personal webcams, with and without two-way sound. YouTube was founded on 15
February 2005 and is now the leading website for free streaming video with a
vast number of users. It uses a flash-based web player to stream and show video
files. Registered users may upload an unlimited amount of video and build their
own personal profile. YouTube claims that its users watch hundreds of millions,
and upload hundreds of thousands of videos daily.
Access
Common methods of Internet access
in homes include dial-up, landline broadband (over coaxial cable, fiber optic
or copper wires), Wi-Fi, satellite and 3G/4G technology cell phones. Public
places to use the Internet include libraries and Internet cafes, where
computers with Internet connections are available. There are also Internet
access points in many public places such as airport halls and coffee shops, in
some cases just for brief use while standing. Various terms are used, such as
"public Internet kiosk", "public access terminal", and
"Web payphone". Many hotels now also have public terminals, though
these are usually fee-based. These terminals are widely accessed for various
usage like ticket booking, bank deposit, online payment etc. Wi-Fi provides
wireless access to computer networks, and therefore can do so to the Internet
itself. Hotspots providing such access include Wi-Fi cafes, where would-be
users need to bring their own wireless-enabled devices such as a laptop or PDA.
These services may be free to all, free to customers only, or fee-based. A
hotspot need not be limited to a confined location. A whole campus or park, or
even an entire city can be enabled.
Grassroots efforts have led to
wireless community networks. Commercial Wi-Fi services covering large city
areas are in place in London, Vienna, Toronto, San Francisco, Philadelphia,
Chicago and Pittsburgh. The Internet can then be accessed from such places as a
park bench. Apart from Wi-Fi, there have been experiments with proprietary
mobile wireless networks like Ricochet, various high-speed data services over
cellular phone networks, and fixed wireless services. High-end mobile phones
such as smartphones in general come with Internet access through the phone
network. Web browsers such as Opera are available on these advanced handsets,
which can also run a wide variety of other Internet software. More mobile
phones have Internet access than PCs, though this is not as widely used. An
Internet access provider and protocol matrix differentiates the methods used to
get online.
An Internet blackout or outage
can be caused by local signaling interruptions. Disruptions of submarine
communications cables may cause blackouts or slowdowns to large areas, such as
in the 2008 submarine cable disruption. Less-developed countries are more
vulnerable due to a small number of high-capacity links. Land cables are also
vulnerable, as in 2011 when a woman digging for scrap metal severed most
connectivity for the nation of Armenia. Internet blackouts affecting almost
entire countries can be achieved by governments as a form of Internet
censorship, as in the blockage of the Internet in Egypt, whereby approximately
93% of networks were without access in 2011 in an attempt to stop mobilization
for anti-government protests.
SPREADSHEETS
A spreadsheet is an interactive
computer application program for organization and analysis of information in
tabular form. Spreadsheets developed as computerized simulations of paper
accounting worksheets. The program operates on data represented as cells of an
array, organized in rows and columns. Each cell of the array is a
model-view-controller element that can contain either numeric or text data, or
the results of formulas that automatically calculate and display a value based
on the contents of other cells.
The user of the spreadsheet can
make changes in any stored value and observe the effects on calculated values.
This makes the spreadsheet useful for "what-if" analysis since many cases
can be rapidly investigated without tedious manual recalculation. Modern
spreadsheet software can have multiple interacting sheets, and can display data
either as text and numerals, or in graphical form.
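The recalculation behavior behind "what-if" analysis can be sketched in a few lines of Python. This is an illustrative toy, not any real spreadsheet engine; the cell names, values, and formulas are invented:

```python
# A toy model of spreadsheet recalculation: raw values and formulas live
# in cells, and changing one stored value changes every dependent result.
cells = {
    "A1": 100,                                # raw value: unit price
    "A2": 5,                                  # raw value: quantity
    "A3": lambda get: get("A1") * get("A2"),  # formula: subtotal
    "A4": lambda get: get("A3") + 40,         # formula: subtotal + flat fee
}

def evaluate(cells):
    # Resolve every cell to a plain value, computing formulas on demand.
    cache = {}
    def get(name):
        if name not in cache:
            v = cells[name]
            cache[name] = v(get) if callable(v) else v
        return cache[name]
    return {name: get(name) for name in cells}

print(evaluate(cells)["A4"])  # 540
cells["A2"] = 7               # the "what-if": change one stored value...
print(evaluate(cells)["A4"])  # ...and every dependent value follows: 740
```

A real engine additionally tracks which cells depend on which, so only affected formulas are recomputed after a change.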
In addition to the fundamental operations of
arithmetic and mathematical functions, modern spreadsheets provide built-in
functions for common financial and statistical operations. Such calculations as
net present value or standard deviation can be applied to tabular data with a
pre-programmed function in a formula. Spreadsheet programs also provide
conditional expressions, functions to convert between text and numbers, and
functions that operate on strings of text.
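As an illustration of such built-in financial and statistical operations, the following Python sketch computes a net present value and a standard deviation over a small invented cash-flow series. The `npv` helper is our own (spreadsheet products differ in whether the first flow is discounted):

```python
from statistics import pstdev

def npv(rate, cashflows):
    # Net present value: discount the cash flow of period t by (1+rate)**t,
    # with the first flow at period 0. This convention is an assumption,
    # not any particular spreadsheet product's NPV.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

flows = [-1000, 300, 420, 680]     # invented cash-flow series
investment_value = npv(0.08, flows)
spread = pstdev(flows)             # population standard deviation
print(round(investment_value, 2), round(spread, 2))
```

In a spreadsheet the same results come from pre-programmed functions applied directly to a range of cells.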
Spreadsheets have now replaced
paper-based systems throughout the business world. Although they were first
developed for accounting or bookkeeping tasks, they now are used extensively in
any context where tabular lists are built, sorted, and shared.
VisiCalc was the first electronic
spreadsheet on a microcomputer, and it helped turn the Apple II computer into a
popular and widely used system. Lotus 1-2-3 was the leading spreadsheet when DOS
was the dominant operating system. Excel now has the largest market share on
the Windows and Macintosh platforms. A spreadsheet program is a standard
feature of an office productivity suite; since the advent of web apps, office
suites now also exist in web app form.
Spreadsheet use
A modern spreadsheet file consists of multiple
worksheets (usually called by the shorter name sheets) that make up one
workbook, with each file being one workbook. A cell on one sheet is capable of
referencing cells on other, different sheets, whether within the same workbook
or even, in some cases, in different workbooks.
Spreadsheets share many
principles and traits of databases, but spreadsheets and databases are not the
same thing. A spreadsheet is essentially just one table, whereas a database is
a collection of many tables with machine-readable semantic relationships between
them. While it is true that a workbook that contains three sheets is indeed a
file containing multiple tables that can interact with each other, it lacks the
relational structure of a database. Spreadsheets and databases are
interoperable—sheets can be imported into databases to become tables within
them, and database queries can be exported into spreadsheets for further
analysis.
A spreadsheet program is one of
the main components of an office productivity suite, which usually also contain
a word processor, a presentation program, and a database management system.
Programs within a suite use similar commands for similar functions. Usually
sharing data between the components is easier than with a non-integrated
collection of functionally equivalent programs. This was particularly an
advantage at a time when many personal computer systems used text-mode displays
and commands, instead of a graphical user interface.
History
Paper spreadsheets
The word
"spreadsheet" came from "spread" in its sense of a newspaper
or magazine item (text and/or graphics) that covers two facing pages, extending
across the center fold and treating the two pages as one large one. The
compound word "spread-sheet" came to mean the format used to present
book-keeping ledgers—with columns for categories of expenditures across the
top, invoices listed down the left margin, and the amount of each payment in
the cell where its row and column intersect—which were, traditionally, a
"spread" across facing pages of a bound ledger (book for keeping
accounting records) or on oversized sheets of paper (termed "analysis
paper") ruled into rows and columns in that format and approximately twice
as wide as ordinary paper.
Lotus 1-2-3 and other MS-DOS spreadsheets
The acceptance of the IBM PC following its
introduction in August, 1981, began slowly, because most of the programs
available for it were translations from other computer models. Things changed
dramatically with the introduction of Lotus 1-2-3 in November, 1982, and
release for sale in January, 1983. Since it was written especially for the IBM
PC, it had good performance and became the killer app for this PC. Lotus 1-2-3
drove sales of the PC due to the improvements in speed and graphics compared to
VisiCalc on the Apple II. Lotus 1-2-3, along with its competitor Borland
Quattro, soon displaced VisiCalc. Lotus 1-2-3 was released on January 26, 1983,
started outselling then-most-popular VisiCalc the very same year, and for a
number of years was the leading spreadsheet for DOS.
Microsoft Excel
Microsoft developed Excel on the
Macintosh platform for several years, and then ported it to Windows 2.0. The
Windows 3.x platforms of the early 1990s made it possible for Excel to take
market share from Lotus. By the time Lotus responded with usable Windows
products, Microsoft had begun to assemble their Office suite. Starting in the
mid-1990s and continuing through the present, Microsoft Excel has dominated the
commercial electronic spreadsheet market.
Open source software
Gnumeric is a free, cross-platform spreadsheet
program that is part of the GNOME Free Software Desktop Project. OpenOffice.org
Calc and the very closely related LibreOffice Calc are free and open-source
spreadsheets, also licensed under the GPL.
Web based spreadsheets
With the advent of advanced web
technologies such as Ajax circa 2005, a new generation of online spreadsheets
has emerged. Equipped with a rich Internet application user experience, the
best web based online spreadsheets have many of the features seen in desktop
spreadsheet applications. Some of them such as Office Web Apps or Google
Spreadsheets also have strong multi-user collaboration features and/or offer
real time updates from remote sources such as stock prices and currency
exchange rates.
Other spreadsheets
A list of current spreadsheet
software
• IBM Lotus Symphony (2007)
• Corel Quattro Pro (WordPerfect Office)
• KSpread
• Kingsoft Spreadsheets
• Numbers, Apple Inc.'s spreadsheet software, part of iWork
• ZCubes-Calci
• Resolver One
• GNU Oleo, a traditional terminal-mode spreadsheet for UNIX/UNIX-like systems
Concepts
The main concepts are those of a
grid of cells, called sheet, with either raw data, called values, or formulas
in the cells. Formulas say how to mechanically compute new values from existing
values. Values are generally numbers, but can be also pure text, dates, months,
etc. Extensions of these concepts include logical spreadsheets. Various tools
for programming sheets, visualizing data, remotely connecting sheets,
displaying cell dependencies, etc. are commonly provided.
Cells
A "cell" can be thought
of as a box for holding data. A single cell is usually referenced by its column
and row (for example, A2 is the cell in column A, row 2). Usually
rows, representing the dependent variables, are referenced in decimal notation
starting from 1, while columns, representing the independent variables, use
26-adic bijective numeration using the letters A-Z as numerals. Its physical
size can usually be tailored for its content by dragging its height or width at
box intersections (or for entire columns or rows by dragging the column or rows
headers).
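The 26-adic bijective column numbering mentioned above can be made concrete with a short Python sketch (the helper names are our own):

```python
def col_to_index(letters: str) -> int:
    # Column label -> 1-based column number under bijective base-26:
    # A=1 ... Z=26, AA=27, AB=28, and so on.
    n = 0
    for ch in letters.upper():
        n = n * 26 + (ord(ch) - ord("A") + 1)
    return n

def index_to_col(n: int) -> str:
    # Inverse conversion: 1 -> A, 26 -> Z, 27 -> AA, 703 -> AAA.
    letters = ""
    while n > 0:
        n, rem = divmod(n - 1, 26)
        letters = chr(ord("A") + rem) + letters
    return letters

print(col_to_index("A"), col_to_index("Z"), col_to_index("AA"))  # 1 26 27
print(index_to_col(703))                                         # AAA
```

The `n - 1` in the inverse is what makes the numeration bijective: there is no zero digit, so Z is followed by AA rather than BA.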
An array of cells is called a
sheet or worksheet. It is analogous to an array of variables in a conventional
computer program (although certain unchanging values, once entered, could be
considered, by the same analogy, constants). In most implementations, many
worksheets may be located within a single spreadsheet. A worksheet is simply a
subset of the spreadsheet divided for the sake of clarity. Functionally, the
spreadsheet operates as a whole and all cells operate as global variables
within the spreadsheet (each formula can 'read' any cell but 'write' only its own).
A cell may contain a value or a
formula, or it may simply be left empty. By convention, formulas usually begin
with an = sign.
Sheets
In the earliest spreadsheets,
cells were a simple two-dimensional grid. Over time, the model has expanded to
include a third dimension, and in some cases a series of named grids, called
sheets. The most advanced examples allow inversion and rotation operations
which can slice and project the data set in various ways.
Formulas
A formula identifies the
calculation needed to place the result in the cell it is contained within. A
cell containing a formula therefore has two display components; the formula
itself and the resulting value. The formula is normally only shown when the
cell is selected by "clicking" the mouse over a particular cell;
otherwise it contains the result of the calculation.
Functions
Spreadsheets usually contain a
number of supplied functions, such as arithmetic operations (for example,
summations, averages and so forth), trigonometric functions, statistical
functions, and so forth. In addition there is often a provision for user-defined
functions. In Microsoft Excel these functions are defined using Visual Basic
for Applications in the supplied Visual Basic editor, and such functions are
automatically accessible on the worksheet. In addition, programs can be written
that pull information from the worksheet, perform some calculations, and report
the results back to the worksheet. For example, a user-assigned function sq
can be introduced using the Visual Basic editor supplied with Excel, and
Excel's Name Manager displays the spreadsheet definitions of named variables
such as x and y.
Spreadsheet risk
Spreadsheet risk is the risk
associated with deriving a materially incorrect value from Excel or a similar
spreadsheet application that will be utilised in making a related (usually
numerically based) decision. Examples include the valuation of an asset, the
determination of financial accounts, the calculation of medicinal doses or the
size of a load-bearing beam for structural engineering. The risk may arise from
inputting erroneous or fraudulent data values, from mistakes (or incorrect
changes) within the logic of the spreadsheet or the omission of relevant
updates (e.g. out-of-date exchange rates). Some single-instance errors have
exceeded US$1 billion. Because spreadsheet risk is principally linked to the
actions (or inaction) of individuals it is defined as a sub-category of
operational risk.
DATABASES
A database is a structured
collection of data. The data are typically organized to model relevant aspects
of reality (for example, the availability of rooms in hotels), in a way that
supports processes requiring this information (for example, finding a hotel
with vacancies).
The term database is correctly
applied to the data and their supporting data structures, and not to the
database management system (DBMS). A database together with its DBMS is
called a database system.
The term database system implies that the data
are managed to some level of quality (measured in terms of accuracy,
availability, usability, and resilience) and this in turn often implies the use
of a general-purpose database management system (DBMS). A general-purpose DBMS
is typically a complex software system that meets many usage requirements to
properly maintain its databases which are often large and complex.
This is especially the case with
client-server, near-real-time transactional systems, in which multiple users
have access to data, and data are concurrently entered and queried in ways that
preclude single-thread batch processing. Much of the complexity of those
requirements is still present in personal, desktop-based database systems.
Well known DBMSs include Oracle, FoxPro, IBM
DB2, Linter, Microsoft Access, Microsoft SQL Server, MySQL, PostgreSQL and
SQLite. A database is not generally portable across different DBMSs, but
different DBMSs can inter-operate to some degree by using standards like SQL
and ODBC together to support a single application built over more than one
database. A DBMS also needs to provide effective run-time execution to properly
support (e.g., in terms of performance, availability, and security) as many
database end-users as needed.
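As a small illustration of interacting with one of the DBMSs named above, the following sketch uses Python's built-in sqlite3 module. The hotel table echoes the room-availability example from the introduction; the table and its rows are invented:

```python
import sqlite3

# A short session with SQLite through Python's built-in sqlite3 module.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE hotel (name TEXT, rooms_free INTEGER)")
con.executemany("INSERT INTO hotel VALUES (?, ?)",
                [("Grand", 0), ("Plaza", 3), ("Lodge", 12)])
# Sticking to standard SQL keeps such a query portable across many DBMSs:
vacancies = con.execute(
    "SELECT name FROM hotel WHERE rooms_free > 0 ORDER BY rooms_free").fetchall()
print(vacancies)  # [('Plaza',), ('Lodge',)]
```

The stored data file itself would not be portable to, say, PostgreSQL, but the SQL statements above would run there largely unchanged, which is the interoperability point made in the paragraph.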
A way to classify databases
involves the type of their contents, for example: bibliographic, document-text,
statistical, or multimedia objects. Another way is by their application area,
for example: accounting, music compositions, movies, banking, manufacturing, or
insurance.
The term database may be narrowed
to specify particular aspects of an organized collection of data, and may refer to
the logical database, to the physical database as data content in computer data
storage, or to many other database sub-definitions.
History
Database concept
The database concept has evolved since the
1960s to ease increasing difficulties in designing, building, and maintaining
complex information systems (typically with many concurrent end-users, and with
a large amount of diverse data). It has evolved together with database
management systems which enable the effective handling of databases. Though the
terms database and DBMS define different entities, they are inseparable: a
database's properties are determined by its supporting DBMS. The Oxford English
Dictionary cites a 1962 technical report as the first to use the term
"data-base." With the progress in technology in the areas of
processors, computer memory, computer storage and computer networks, the sizes,
capabilities, and performance of databases and their respective DBMSs have
grown in orders of magnitude. For decades it has been unlikely that a complex
information system can be built effectively without a proper database supported
by a DBMS. The utilization of databases is now spread to such a wide degree
that virtually every technology and product relies on databases and DBMSs for
its development and commercialization, or may even have them embedded in it.
Also, organizations and companies, from small to large, heavily depend on
databases for their operations.
No widely accepted exact
definition exists for DBMS. However, a system needs to provide considerable
functionality to qualify as a DBMS. Accordingly, its supported data collection
needs to meet respective usability requirements (broadly defined by the
requirements below) to qualify as a database. Thus, a database and its
supporting DBMS are defined here by a set of general requirements listed below.
Virtually all existing mature DBMS products meet these requirements to a great
extent, while less mature ones either meet them or are converging toward them.
Evolution
of database and DBMS technology
The introduction of the term
database coincided with the availability of direct-access storage (disks and
drums) from the mid-1960s onwards. The term represented a contrast with the
tape-based systems of the past, allowing shared interactive use rather than
daily batch processing.
In the earliest database systems,
efficiency was perhaps the primary concern, but it was already recognized that
there were other important objectives. One of the key aims was to make the data
independent of the logic of application programs, so that the same data could
be made available to different applications.
In the period since the 1970s database
technology has kept pace with the increasing resources becoming available from
the computing platform: notably the rapid increase in affordable capacity and
speed of disk storage, and of main memory. This has enabled ever larger databases
and higher throughput to be achieved.
The first generation of
general-purpose database systems were navigational: applications typically
accessed data by following pointers from one record to another. The two main
data models at this time were the hierarchical model, epitomized by IBM's IMS
system, and the Codasyl model (Network model), implemented in a number of
products such as IDMS.
The relational model, first
proposed in 1970 by Edgar F. Codd, departed from this tradition by insisting
that applications should search for data by content, rather than by following
links. This was considered necessary to allow the content of the database to
evolve without constant rewriting of links and pointers. The relational model
is made up of ledger-style tables, each used for a different type of entity.
Data may be freely inserted, deleted and edited in these tables, with the DBMS
doing whatever maintenance is needed to present a
table view to the application/user. The relational part comes from entities
referencing other entities in what is known as a one-to-many relationship, like a
traditional hierarchical model, and a many-to-many relationship, like a
navigational (network) model. Thus, a relational model can express both
hierarchical and navigational models, as well as its native tabular model,
allowing for pure or combined modeling in terms of these three models, as the
application requires.
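The one-to-many and many-to-many relationships described above can be sketched as relational tables in SQLite; all table, column, and row contents are invented for the illustration:

```python
import sqlite3

# One-to-many and many-to-many relationships as relational tables.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE department (id INTEGER PRIMARY KEY, name TEXT);
    -- one-to-many: each employee references exactly one department
    CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT,
                           dept_id INTEGER REFERENCES department(id));
    CREATE TABLE project (id INTEGER PRIMARY KEY, name TEXT);
    -- many-to-many: a junction table of (employee, project) key pairs
    CREATE TABLE assignment (emp_id INTEGER REFERENCES employee(id),
                             proj_id INTEGER REFERENCES project(id));
    INSERT INTO department VALUES (1, 'R&D');
    INSERT INTO employee VALUES (1, 'Ada', 1), (2, 'Grace', 1);
    INSERT INTO project VALUES (10, 'Compiler'), (11, 'Engine');
    INSERT INTO assignment VALUES (1, 10), (1, 11), (2, 10);
""")
# Search by content rather than by following pointers:
staff = con.execute("""
    SELECT e.name FROM employee e
    JOIN assignment a ON a.emp_id = e.id
    JOIN project p ON p.id = a.proj_id
    WHERE p.name = 'Compiler' ORDER BY e.name""").fetchall()
print(staff)  # [('Ada',), ('Grace',)]
```

The query names the content it wants ("employees on the Compiler project") and lets the DBMS resolve the key relationships, which is exactly the departure from pointer-following navigation that Codd proposed.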
The earlier expressions of the
relational model did not make relationships between different entities explicit
in the way practitioners were used to back then, but as primary keys and
foreign keys. These keys, though, can be also seen as pointers in their own
right, stored in tabular form. This use of keys rather than pointers
conceptually obscured relations between entities, at least in the way it was
presented back then. Thus, the wisdom at the time was that the relational model
emphasizes search rather than navigation, and that it was a good conceptual
basis for a query language, but less well suited as a navigational language. As
a result, another data model, the entity-relationship model which emerged
shortly later (1976), gained popularity for database design, as it emphasized a
more familiar description than the earlier relational model. Later on, entity-relationship
constructs were retrofitted as a data modeling construct for the relational
model, and the difference between the two has become irrelevant.
Earlier relational system
implementations lacked the sophisticated automated optimizations of conceptual
elements and operations versus their physical storage and processing
counterparts, present in modern DBMSs, so their
simplistic and literal implementations placed heavy demands on the limited
processing resources at the time. It was not until the mid 1980s that computing
hardware became powerful enough to allow relational systems (DBMSs plus
applications) to be widely deployed. By the early 1990s, however, relational
systems were dominant for all large-scale data processing applications, and
they remain dominant today (2012) except in niche areas. The dominant database
language is the standard SQL for the Relational model, which has influenced
database languages for other data models.
The rigidity of the relational
model, in which all data are held in related tables with a fixed structure of
rows and columns, has increasingly been seen as a limitation when handling
information that is richer or more varied in structure than the traditional
"ledger-book" data of corporate information systems. These limitations
come to play when modeling document databases, engineering databases,
multimedia databases, or databases used in the molecular sciences.
Most of that rigidity, though, is due to the
need to represent new data types other than text and text-alikes within a
relational model. Examples of unsupported data types are:
• graphics (and operations such
as pattern-matching and OCR)
• Multidimensional constructs
such as 2D (geographical), 3D (geometrical), and multidimensional hypercube
models (data analysis).
• XML (a hierarchical data modeling
technology evolved from SGML and HTML), used for data interchange among
dissimilar systems.
More fundamental conceptual limitations came
with Object Oriented methodologies, with their emphasis on encapsulating data
and processes (methods), as well as expressing constructs such as events or
triggers. Traditional data modeling constructs emphasize the total separation
of data from processes, though modern DBMS do allow for some limited modeling
in terms of validation rules and stored procedures.
Various attempts have been made
to address this problem, many of them under banners such as post-relational or NoSQL.
Two developments of note are the object database and the XML database. The
vendors of relational databases have fought off competition from these newer
models by extending the capabilities of their own products to support a wider
variety of data types.
Database type examples
The following are examples of
various database types. Some of them are not mainstream types, but most of
them have received special attention (e.g., in research) due to end-user
requirements. Some exist as specialized DBMS products, and some have their
functionality types incorporated in existing general-purpose DBMSs. Though they
may differ in nature and functionality, these various types typically have to
comply with the usability requirements below to qualify as databases.
• Active database
An active database is a database
that includes an event-driven architecture which can respond to conditions both
inside and outside the database. Possible uses include security monitoring,
alerting, statistics gathering and authorization. Most modern relational
databases include active database features in the form of database triggers.
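A minimal sketch of such a trigger, using SQLite through Python's sqlite3 module; the account/audit schema is invented:

```python
import sqlite3

# An active-database trigger in SQLite: the database itself reacts to an
# INSERT event by writing an audit row, with no application code involved.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER);
    CREATE TABLE audit_log (account_id INTEGER, note TEXT);
    CREATE TRIGGER log_new_account AFTER INSERT ON account
    BEGIN
        INSERT INTO audit_log VALUES (NEW.id, 'account created');
    END;
    INSERT INTO account VALUES (1, 500);
""")
log = con.execute("SELECT * FROM audit_log").fetchall()
print(log)  # [(1, 'account created')]
```

This is the statistics-gathering/auditing use case mentioned above: the event-condition-action logic lives inside the database rather than in every application that writes to it.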
• Cloud database
A Cloud database is a database
that relies on cloud technology. Both the database and most of its DBMS reside
remotely, "in the cloud," while its applications are both developed
by programmers and later maintained and utilized by (application's) end-users
through a web browser and open APIs. More and more such database products are
emerging, both from new vendors and from virtually all established database
vendors.
• Data warehouse
Data warehouses archive data from
operational databases and often from external sources such as market research
firms. Often operational data undergo transformation on their way into the
warehouse, getting summarized, anonymized, reclassified, etc. The warehouse
becomes the central source of data for use by managers and other end-users who
may not have access to operational data. For example, sales data might be
aggregated to weekly totals and converted from internal product codes to use
UPCs so that they can be compared with ACNielsen data. Some basic and essential
components of data warehousing include retrieving, analyzing, and mining data,
transforming, loading and managing data so as to make them available for further
use.
Operations in a data warehouse
are typically concerned with bulk data manipulation, and as such, it is unusual
and inefficient to target individual rows for update, insert or delete. Bulk
native loaders for input data and bulk SQL passes for aggregation are the norm.
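The kind of bulk SQL aggregation pass described above can be sketched as a GROUP BY rollup of individual sales records into per-week totals; the table, product codes, and figures are invented:

```python
import sqlite3

# Warehouse-style summarization: detail rows rolled up to weekly totals
# per product code in one bulk aggregation pass, not row-by-row updates.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (week INTEGER, product TEXT, amount INTEGER)")
con.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                [(1, "upc-001", 40), (1, "upc-001", 60), (2, "upc-001", 25)])
weekly = con.execute("""
    SELECT week, product, SUM(amount)
    FROM sales GROUP BY week, product ORDER BY week""").fetchall()
print(weekly)  # [(1, 'upc-001', 100), (2, 'upc-001', 25)]
```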
• Distributed database
The definition of a distributed
database is broad, and may be used with different meanings. In general it
typically refers to a modular DBMS architecture that allows distinct DBMS
instances to cooperate as a single DBMS over processes, computers, and sites,
while managing a single database that is itself distributed over multiple computers
and different sites. Examples are databases of local work-groups and
departments at regional offices, branch offices, manufacturing plants and other
work sites. These databases can include both segments shared by multiple sites,
and segments specific to one site and used only locally in that site.
• Document-oriented database
A document-oriented database is a
computer program designed for storing, retrieving, and managing
document-oriented, or semi-structured, information. Document-oriented
databases are one of the main categories of so-called NoSQL databases, and the
popularity of the term "document-oriented database" (or
"document store") has grown with the use of the term NoSQL itself. They
are used to conveniently store, manage, edit and retrieve documents.
• Embedded database
An embedded database system is a
DBMS which is tightly integrated with application software that requires
access to stored data in such a way that the DBMS is "hidden" from the
application's end-user and requires little or no ongoing maintenance. It is
actually a broad technology category that includes DBMSs with differing
properties and target markets. The term "embedded database" can be
confusing because only a small subset of embedded database products is used in
real-time embedded systems such as telecommunications switches andconsumer
electronics devices.
• End-user database
These databases consist of data
developed by individual end-users. Examples of these are collections of
documents, spreadsheets, presentations, multimedia, and other files. Several
products exist to support such databases. Some of them are much simpler than
full-fledged DBMSs, offering more elementary DBMS functionality (e.g., not
supporting multiple concurrent end-users on the same database), basic
programming interfaces, and a relatively small "footprint" (not much
code to run, unlike "regular" general-purpose DBMSs). However,
general-purpose DBMSs can often also be used for this purpose, if they
provide basic user interfaces for straightforward database applications
(limited query and data display; no real programming needed), while still providing
the database qualities and protections that these DBMSs can offer.
• Federated database and
multi-database
A federated database is an
integrated database that comprises several distinct databases, each with its
own DBMS. It is handled as a single database by a federated database management
system (FDBMS), which transparently integrates multiple autonomous DBMSs,
possibly of different types (which makes it a heterogeneous database), and
provides them with an integrated conceptual view. The constituent databases are
interconnected via computer network, and may be geographically decentralized.
Sometimes the term multi-database
is used as a synonym for federated database, though it may refer to a less
integrated (e.g., without an FDBMS and a managed integrated schema) group of
databases that cooperate in a single application. In this case, distribution
middleware is typically used, which includes an atomic commit
protocol (ACP), e.g., the two-phase commit protocol, to allow distributed
(global) transactions (vs. local transactions confined to a single DBMS) across
the participating databases.
• Graph database
A graph database is a kind of NoSQL database
that uses graph structures with nodes, edges, and properties to represent and
store information. General graph databases that can store any graph are
distinct from specialized graph databases such as triplestores and network
databases.
• Hypermedia databases
The World Wide Web can be thought
of as a database, albeit one spread across millions of independent computing
systems. Web browsers "process" these data one page at a time,
while web crawlers and other software provide the equivalent of database indexes
to support search and other activities.
• Hypertext database
In a Hypertext database, any word or a piece
of text representing an object, e.g., another piece of text, an article, a
picture, or a film, can be linked to that object. Hypertext databases are
particularly useful for organizing large amounts of disparate information. For
example, they are useful for organizing online encyclopedias, where users can
conveniently jump between texts, in a controlled way, by using hyperlinks.
• In-memory database
An in-memory database (IMDB; also main memory
database or MMDB) is a database that primarily resides in main memory, but is
typically backed up by non-volatile computer data storage. Main memory
databases are faster than disk databases. Accessing data in memory reduces the
I/O reading activity when, for example, querying the data. In applications where
response time is critical, such as telecommunications network equipment, main
memory databases are often used.
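SQLite can illustrate the in-memory pattern described above: the working database lives entirely in RAM and is then snapshotted to non-volatile storage with the backup API (the file path and table here are invented):

```python
import os, sqlite3, tempfile

# In-memory database pattern: work in RAM, persist a snapshot to disk.
mem = sqlite3.connect(":memory:")            # data live in main memory only
mem.execute("CREATE TABLE t (x INTEGER)")
mem.execute("INSERT INTO t VALUES (42)")
mem.commit()

path = os.path.join(tempfile.mkdtemp(), "snapshot.db")
disk = sqlite3.connect(path)
mem.backup(disk)                             # copy the in-memory database
disk.close()

check = sqlite3.connect(path)                # reopen the on-disk snapshot
print(check.execute("SELECT x FROM t").fetchall())  # [(42,)]
```

Reads and writes against `mem` never touch the disk, which is where the speed advantage of main-memory databases comes from; the periodic snapshot provides the non-volatile backup the paragraph mentions.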
• Parallel database
A parallel database, run by a
parallel DBMS, seeks to improve performance through parallelization for tasks
such as loading data, building indexes and evaluating queries. Parallel
databases improve processing and input/output speeds by using multiple central
processing units (CPUs) (including multi-core processors) and storage in
parallel. In parallel processing, many operations are performed simultaneously,
as opposed to serial, sequential processing, where operations are performed
with no time overlap.
The major parallel DBMS
architectures (which are induced by the underlying hardware architecture) are:
• Shared-memory architecture, where multiple
processors share the main memory space, as well as other data storage.
• Shared-disk architecture, where each processing
unit (typically consisting of multiple processors) has its own main memory, but
all units share the other storage.
• Shared-nothing architecture, where each
processing unit has its own main memory and other storage.
• Spatial database
A spatial database can store data with
multidimensional features. Queries on such data include location-based
queries, like "where is the closest hotel in my area?".
• Temporal database
A temporal database is a database with
built-in time aspects, for example a temporal data model and a temporal version
of Structured Query Language (SQL). More specifically, the temporal aspects
usually include valid time and transaction time.
• Unstructured-data database
An unstructured-data database is
intended to store in a manageable and protected way diverse objects that do not
fit naturally and conveniently in common databases. It may include email
messages, documents, journals, multimedia objects etc. The name may be
misleading since some objects can be highly structured. However, the entire
possible object collection does not fit into a predefined structured framework.
Most established DBMSs now support unstructured data in various ways, and new
dedicated DBMSs are emerging.
Major
database functional areas
The functional areas are domains and subjects
that have evolved in order to provide proper answers and solutions to the
functional requirements above.
Data models
A data model is an abstract
structure that provides the means to effectively describe specific data
structures needed to model an application. As such, a data model needs
sufficient expressive power to capture the needed aspects of applications.
These applications are often typical to commercial companies and other
organizations (like manufacturing, human-resources, stock, banking, etc.). For
effective utilization and handling it is desired that a data model is
relatively simple and intuitive. This may be in conflict with high expressive
power needed to deal with certain complex applications. Thus any popular
general-purpose data model usually strikes a balance between being intuitive and
relatively simple, and being complex with high expressive power. The application's
semantics is usually not explicitly expressed in the model, but rather implicit
(and detailed by documentation external to the model) and hinted at by data
item types' names (e.g., "part-number") and their connections (as
expressed by generic data structure types provided by each specific model).
Database languages
Database languages are dedicated programming
languages, tailored and utilized to
• define a database (i.e., its specific data
types and the relationships among them),
• manipulate its content (e.g., insert new
data occurrences, and update or delete existing ones), and
• query it (i.e., request
information: compute and retrieve any information based on its data).
Database languages are data-model-specific,
i.e., each language assumes and is based on a certain structure of the data
(which typically differs among different data models). They typically have
commands to instruct execution of the desired operations in the database. Each
such command is equivalent to a complex expression (program) in a regular
programming language, and thus programming in dedicated (database) languages
simplifies the task of handling databases considerably. An expression in a
database language is automatically transformed (by a compiler or interpreter,
as regular programming languages) to a proper computer program that runs while
accessing the database and providing the needed results. The following are
notable examples:
• SQL for the relational model
• OQL for the object model
• XQuery for the XML model
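The three roles above (define, manipulate, query) can be sketched with SQL through Python's built-in sqlite3 module; the table and column names here are illustrative:

```python
import sqlite3

# A minimal sketch of the three roles of a database language,
# using SQL via an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# 1. Define: declare the specific data types and their relationships.
cur.execute("CREATE TABLE part (part_number TEXT PRIMARY KEY, price REAL)")

# 2. Manipulate: insert new data occurrences, update or delete existing ones.
cur.execute("INSERT INTO part VALUES ('A-100', 9.50)")
cur.execute("INSERT INTO part VALUES ('B-200', 4.25)")
cur.execute("UPDATE part SET price = 10.00 WHERE part_number = 'A-100'")

# 3. Query: compute and retrieve information based on the data.
cur.execute("SELECT part_number FROM part WHERE price > 5.0")
print(cur.fetchall())  # [('A-100',)]
```

Each one-line statement above stands in for what would be a much longer program in a general-purpose language, which is the simplification the text describes.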
Implementation: database
management systems
A database management system
(DBMS) is a system that allows one to build and maintain databases, as well as to
utilize their data and retrieve information from them. A DBMS implements
solutions (see Major database functional areas above) to the database usability
requirements above. It defines the database type that it supports, as well as
its functionality and operational capabilities. A DBMS provides the internal
processes for external applications built on it. The end-users of such a
specific application are usually exposed only to that application and do not
directly interact with the DBMS. Thus end-users enjoy the effects of the
underlying DBMS, but its internals are completely invisible to end-users.
Database designers and database administrators interact with the DBMS through
dedicated interfaces to build and maintain the applications' databases, and
thus need some more knowledge and understanding about how DBMSs operate and the
DBMSs' external interfaces and tuning parameters.
A DBMS consists of software that
operates databases, providing storage, access, security, backup and other
facilities to meet needed requirements. DBMSs can be categorized according to
the database model(s) that they support, such as relational or XML, the type(s)
of computer they support, such as a server cluster or a mobile phone, the query
language(s) that access the database, such as SQL or XQuery, performance
trade-offs, such as maximum scale or maximum speed or others. Some DBMSs cover
more than one entry in these categories, e.g., supporting multiple query
languages. Database software typically supports the Open Database Connectivity
(ODBC) standard which allows the database to integrate (to some extent) with
other databases.
The development of a mature
general-purpose DBMS typically takes several years and many man-years.
Developers of DBMS typically update their product to follow and take advantage
of progress in computer and storage technologies. Several DBMS products like
Oracle and IBM DB2 have been in on-going development since the 1970s-1980s.
Since DBMSs comprise a significant economic market, computer and storage
vendors often take into account DBMS requirements in their own development
plans.
Database storage
Database storage is the container of the
physical materialization of a database. It comprises the Internal (physical)
level in the database architecture. It also contains all the information needed
(e.g., metadata, "data about the data", and internal data structures)
to reconstruct the conceptual level and external level from the Internal level
when needed. It is not part of the DBMS but rather manipulated by the DBMS (by
its Storage engine; see above) to manage the database that resides in it. Though
typically accessed by a DBMS through the underlying operating system (and often
utilizing the operating system's file systems as intermediates for storage
layout), storage properties and configuration settings are extremely important
for the efficient operation of the DBMS, and thus are closely maintained by
database administrators. A DBMS, while in operation, always has its database
residing in several types of storage (e.g., memory and external storage). The
database data and the additional needed information, possibly in very large
amounts, are coded into bits. Data typically reside in the storage in
structures that look completely different from the way the data look in the
conceptual and external levels, but in ways that attempt to optimize the
reconstruction of these levels when needed by users and programs, as well as for
computing additional types of needed information from the data (e.g., when
querying the database).
In principle the database storage can be
viewed as a linear address space, where every bit of data has its unique
address in this address space. Practically only a very small percentage of
addresses is kept as initial reference points (which also requires storage),
and most of the database data are accessed by indirection using displacement
calculations (distance in bits from the reference points) and data structures
which define access paths (using pointers) to all needed data in an effective
manner, optimized for the needed data access operations.
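The displacement idea above can be sketched in a few lines: with fixed-size records packed into one buffer, only a single reference point (the base offset) needs to be kept, and any record's address is computed from it rather than stored. The record layout here is invented purely for illustration:

```python
import struct

# Displacement-based access in a linear address space: fixed-size records
# packed into one byte buffer, addressed as base + index * record_size.
RECORD = struct.Struct("<i8s")  # 4-byte integer id + 8-byte fixed-width name
BASE = 0                        # the single kept reference point

buf = bytearray(RECORD.size * 3)
for i, (rid, name) in enumerate([(1, b"alpha"), (2, b"beta"), (3, b"gamma")]):
    # struct pads the name field with null bytes up to 8 bytes.
    RECORD.pack_into(buf, BASE + i * RECORD.size, rid, name)

def fetch(buf, i):
    """Access record i by displacement from the reference point."""
    rid, name = RECORD.unpack_from(buf, BASE + i * RECORD.size)
    return rid, name.rstrip(b"\x00").decode()

print(fetch(buf, 2))  # (3, 'gamma')
```

Real storage engines layer indexes and pointer-based access paths on top of this basic address arithmetic, but the underlying principle is the same.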
PART 3
"Creative
Software"
Graphic Design
Graphic design is a creative process—most often involving a client
and a designer and usually completed in conjunction with producers of form
(i.e., printers, signmakers, etc.)—undertaken in order to convey a specific message (or messages) to a targeted
audience. The term "graphic design" can also refer to a number
of artistic and professional disciplines that focus on visual communication and
presentation. The field as a whole is also often referred to as Visual
Communication or Communication Design. Various methods are used to create and combine words,
symbols, and images to create a visual representation of ideas and messages. A graphic designer may use
a combination of typography, visual arts and page layout techniques to produce the final result.
Graphic design often refers to both the process (designing) by which the
communication is created and the products (designs) which are generated.
Common
uses of graphic design include identity (logos and branding), publications
(magazines, newspapers, and books), advertisements and product packaging. For
example, a product package might include a logo or other artwork, organized text and pure design elements
such as shapes and color which unify the piece.
Composition is one of the most important features of graphic design, especially
when using pre-existing materials or diverse elements.
History
While
Graphic Design as a discipline has a relatively recent history, with the term
"graphic design" first coined by William Addison Dwiggins in 1922, graphic design-like
activities span the history of humankind: from the caves of Lascaux, to Rome's Trajan's Column to the
illuminated manuscripts of the Middle Ages, to the dazzling neons of Ginza. In
both this lengthy history and in the relatively recent explosion
of visual communication in the 20th and 21st centuries, there is sometimes a
blurring distinction and over-lapping of
advertising art, graphic design and fine art. After all, they share many of the
same elements, theories, principles, practices and languages, and sometimes the
same benefactor or client. In advertising art
the ultimate objective is the sale of goods and services. In graphic design, "the essence is to give order to
information, form to ideas, expression and feeling to artifacts that document
human experience."
The advent of printing
During the
Tang Dynasty (618-907) between the 7th and 9th century AD, wood blocks were cut
to print on textiles and later to reproduce Buddhist
texts. A Buddhist scripture printed in 868 is the earliest known printed book. Beginning in the 11th
century, longer scrolls and books were produced using movable type printing
making books widely available during the Song dynasty (960-1279). Sometime around 1450, Johann Gutenberg's printing press
made books widely available in Europe. The book design of Aldus Manutius
developed the book structure which would become the foundation of western
publication design. This era of graphic design is called Humanist or Old Style.
In late 19th century Europe, especially
in the United Kingdom, the movement began to separate graphic design from fine
art.
In 1849, Henry Cole became one of the
major forces in design education in Great Britain, informing the government of the importance of design in his Journal
of Design and Manufactures. He organized the Great Exhibition as a
celebration of modern industrial technology and Victorian design.
From 1891
to 1896, William Morris' Kelmscott Press published books that are some of the
most significant of the graphic design products of the
Arts and Crafts movement, and made a very lucrative business of creating books
of great stylistic refinement and selling them to the wealthy for a premium. Morris proved that a market existed for works of
graphic design in their own right and helped pioneer the separation of
design from production and from fine art. The work of the Kelmscott Press is
characterized by its obsession with historical styles. This historicism was,
however, important as it amounted to the first significant reaction to the stale state of
nineteenth-century graphic design. Morris' work,
along with the rest of the Private Press movement, directly influenced Art
Nouveau and is indirectly responsible for developments in early
twentieth century graphic design in general.
Twentieth
century design
The name "Graphic Design"
first appeared in print in the 1922 essay "New Kind of Printing Calls for
New Design" by William Addison Dwiggins, an American book designer in the
early 20th century.
Raffe's Graphic Design, published
in 1927, is considered to be the first book to use "Graphic Design"
in its title.
The signage in the London Underground is a classic design example of
the modern era and used a typeface designed by Edward Johnston in 1916.
In the
1920s, Soviet constructivism applied 'intellectual production' in different
spheres of production. The
movement saw individualistic art as useless in revolutionary Russia and thus
moved towards creating objects for utilitarian
purposes. They designed buildings, theater sets, posters, fabrics, clothing,
furniture, logos, menus, etc.
Jan Tschichold codified the principles
of modern typography in his 1928 book, New Typography. He later repudiated the philosophy he espoused in this book
as being fascistic, but it remained very influential. Tschichold, Bauhaus typographers such as Herbert Bayer and
Laszlo Moholy-Nagy, and El Lissitzky have greatly influenced graphic
design as we know it today. They pioneered production techniques and stylistic
devices used throughout the twentieth century. The following years saw graphic
design in the modern style gain widespread acceptance and application. A
booming post-World War II American economy established a greater need for
graphic design, mainly advertising and packaging. The emigration of the German
Bauhaus school of design to Chicago in 1937 brought a "mass-produced"
minimalism to America, sparking a wildfire of "modern" architecture
and design. Notable names in mid-century modern design include Adrian Frutiger,
designer of the typefaces Univers and Frutiger; Paul Rand, who, from the
late 1930s until his death in 1996, took the principles of the Bauhaus and
applied them to popular advertising and logo
design, helping to create a uniquely American approach to European
minimalism while becoming one of the principal pioneers of the subset of
graphic design known as corporate identity;
and Josef Müller-Brockmann, who designed posters in a severe yet accessible
manner typical of the 1950s and 1970s era.
The professional graphic design industry has grown in
parallel with the rise
of consumerism. This has raised some
concerns and criticisms, notably from within the graphic design community with
the First Things First manifesto. First launched by Ken Garland in 1964, it was
republished as the First Things First 2000
manifesto in 1999 in the magazine Emigre 51, stating "We propose a reversal of
priorities in favor of more useful, lasting and democratic forms of
communication - a mindshift away from product marketing and toward the
exploration and production of a new kind of meaning.
The scope of debate is shrinking; it must expand. Consumerism is running
uncontested; it must be challenged by other perspectives expressed, in
part, through the visual languages and resources of design." Both editions attracted signatures from respected design
practitioners and thinkers, for example
Rudy VanderLans, Erik Spiekermann, Ellen Lupton and Rick Poynor. The 2000
manifesto was also notably published in Adbusters, known for its strong
critiques of visual culture.
Applications
From road signs to technical schematics,
from interoffice memorandums to reference manuals, graphic design enhances
transfer of knowledge and visual messages. Readability and legibility are
enhanced by improving the visual presentation and layout of text.
Design can also aid in selling a product
or idea through effective visual communication. It is applied to products and elements of company identity like logos, colors,
packaging, and text. Together these are defined as branding (see also
advertising). Branding has increasingly become important in the range of services offered by many graphic designers,
alongside corporate identity. Whilst the terms are often used
interchangeably, branding is more strictly related to the identifying mark or
trade name for a product or service, whereas corporate identity can have a
broader meaning relating to the structure and ethos of a company, as well as to
the company's external image. Graphic designers will often form part of a team
working on corporate identity and branding projects. Other members of that team
can include marketing professionals, communications consultants and commercial
writers.
Textbooks
are designed to present subjects such as geography, science, and math. These
publications have layouts which illustrate theories
and diagrams. A common example of graphics in use to educate is diagrams of
human anatomy. Graphic design is also applied to layout and formatting of
educational material to make the information more accessible and more readily
understandable.
Graphic design is applied in the entertainment
industry in decoration, scenery, and visual storytelling. Other
examples of design for entertainment purposes include
novels, comic books, DVD covers, opening credits
and closing credits in filmmaking, and programs and props on stage. This could
also include artwork used for t-shirts and other items screenprinted for
sale.
From scientific journals to news
reporting, the presentation of opinion and facts is often improved with
graphics and thoughtful compositions of visual information - known as
information design. Newspapers, magazines, blogs, television and film
documentaries may use graphic design to inform and entertain. With the advent
of the web, information designers with experience in interactive tools such as
Adobe Flash are increasingly being used to illustrate the background to news
stories.
Skills
A
graphic design project may involve the stylization and presentation of existing
text and either preexisting imagery or images developed by the graphic
designer. For example, a newspaper story begins with the journalists and
photojournalists; it then becomes the graphic designer's job to organize the
page into a reasonable layout and determine whether any other graphic elements
are required. In a magazine article or advertisement, often the graphic
designer or art director will commission photographers or illustrators to
create original pieces just to be incorporated into the design layout. Or the designer may utilize stock imagery or
photography. Contemporary design practice has been extended to the modern computer, for example in the
use of WYSIWYG user interfaces, often referred to as interactive design, or multimedia design.
Visual arts
Before any graphic elements may be
applied to a design, the graphic elements must be originated by means of visual art skills. These graphics are
often (but not always) developed by a graphic designer. Visual arts
include works which are primarily visual in nature using anything from
traditional media, to photography or computer generated art. Graphic design
principles may be applied to each graphic art element individually as well as
to the final composition.
Typography
Typography
is the art, craft and techniques of type design, modifying type glyphs, and
arranging type. Type glyphs (characters) are created and modified using a
variety of illustration techniques. The arrangement of
type is the selection of typefaces, point size, tracking (the space between all
characters used), kerning (the space between two specific characters), and
leading (line spacing).
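Leading, the last of these, is simple arithmetic: successive baselines sit one leading apart. A minimal sketch (units are points; the values are illustrative, not tied to any particular typesetting system):

```python
# Leading (line spacing) positions lines of type: each baseline sits
# one leading below the previous one.
def baselines(first_baseline, leading, n_lines):
    """Return the vertical position of each baseline, top to bottom, in points."""
    return [first_baseline + i * leading for i in range(n_lines)]

# 10 pt type set "10 on 12" -- the traditional shorthand for 12 pt leading.
print(baselines(first_baseline=10, leading=12, n_lines=4))
# [10, 22, 34, 46]
```

Tracking and kerning work analogously on the horizontal axis, adjusting the advance width between all characters or between a specific pair.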
Typography is performed by typesetters,
compositors, typographers, graphic artists, art directors, and clerical workers.
Until the Digital Age, typography was a specialized occupation. Digitization
opened up typography to new generations of visual designers and lay users.
Page layout
The page
layout aspect of graphic design deals with the arrangement of elements (content)
on a page, such as image placement, and text layout and style. Beginning from
early illuminated pages in hand-copied books of the
Middle Ages and proceeding down to intricate modern magazine and catalogue
layouts, structured page design has long been a consideration in printed
material. With print media, elements
usually consist of type (text), images (pictures), and occasionally
place-holder graphics for elements that are not printed with ink such as
die/laser cutting, foil stamping or blind embossing.
Interface design
Since the advent of the World Wide Web
and computer software development, many graphic designers have become involved in interface design. This
has included web design and software design, where end-user interactivity
is a design consideration of the layout or interface. Combining visual
communication skills with the interactive communication skills of user
interaction and online branding, graphic designers often work with software
developers and web developers to create both the look and feel of a web site or
software application and enhance the interactive experience of the user or web
site visitor. An important aspect of interface design is icon design.
User experience design
User experience design considers how a user interacts with and
responds to an interface, service or product, and adjusts the design accordingly.
Printmaking
Printmaking is the process of making
artworks by printing on paper and other materials or surfaces. Except in the case of monotyping, the process is
capable of producing multiples of the same piece, which is called a
print. Each piece is not a copy but an original since it is not a reproduction
of another work of art and is technically known as an impression. Painting or
drawing, on the other hand, creates a unique original piece of artwork. Prints are
created from a single original surface, known technically as a matrix. Common
types of matrices include: plates of metal, usually copper or zinc for
engraving or etching; stone, used for
lithography; blocks of wood for woodcuts, linoleum for linocuts and fabric
plates for screen-printing. But there are many other kinds, discussed
below. Works printed from a single plate create an edition, in modern times
usually each signed and numbered to form a limited edition. Prints may also be
published in book form, as artist's books. A single print could be the product
of one or multiple techniques.
Tools
The mind may be the most important graphic design tool. Aside from
technology, graphic design requires judgment
and creativity. Critical, observational, quantitative and analytic thinking are
required for design layouts and rendering. If the executor is merely
following a solution (e.g. sketch, script or instructions) provided by another
designer (such as an art director), then the executor is not usually considered
the designer.
The method of presentation (e.g.
arrangement, style, medium) may be equally important to the design. The layout
is produced using external traditional or digital image editing tools. The
appropriate development and presentation tools can substantially change how an
audience perceives a project.
In the mid 1980s, the arrival of desktop publishing and graphic art
software applications introduced a generation of designers to computer image
manipulation and creation that had previously been manually executed. Computer
graphic design enabled designers to instantly see the effects of layout or
typographic changes, and to simulate the effects of traditional media without
requiring a lot of space. However, traditional tools such as pencils or markers
are useful even when computers are used for finalization; a designer or art
director may hand sketch numerous concepts as part of the creative process.
Some of these sketches may even be shown to a client for early stage approval,
before the designer develops the idea further using a computer and graphic design
software tools.
Computers are considered an
indispensable tool in the graphic design industry. Computers and software
applications are generally seen by creative professionals as more effective
production tools than traditional methods. However, some designers, such as
Milton Glaser, continue to use manual and traditional tools for production.
New ideas can come by way of
experimenting with tools and methods. Some designers explore ideas using pencil
and paper. Others use many different mark-making tools and resources from
computers to sculpture as a means of inspiring creativity. One of the key
features of graphic design is that it makes a tool out of appropriate image
selection in order to convey meaning.
Computers and the creative process
There is some debate whether
computers enhance the creative process of graphic design. Rapid production from
the computer allows many designers to explore multiple ideas quickly with more
detail than what could be achieved by traditional hand-rendering or paste-up on
paper, moving the designer through the creative process more quickly. However,
being faced with limitless choices does not help isolate the best design
solution and can lead to endless iterations with no clear design outcome.
A graphic designer may use
sketches to explore multiple or complex ideas quickly without the distractions
and complications of software. Hand-rendered comps are often used to get
approval for an idea execution before a designer invests time to produce
finished visuals on a computer or in paste-up. The same thumbnail sketches or
rough drafts on paper may be used to rapidly refine and produce the idea on the
computer in a hybrid process. This hybrid process is especially useful in logo
design where a software learning curve may detract from a creative thought
process. The traditional-design/computer-production hybrid process may be used
for freeing one's creativity in page layout or image development as well. In
the early days of computer publishing, many "traditional" graphic
designers relied on computer-savvy production artists to produce their ideas
from sketches, without needing to learn the computer skills themselves.
However, this practice has been increasingly less common since the advent of
desktop publishing over 30 years ago. The use of computers and graphics
software is now taught in most graphic design courses.
Nearly all of the popular and "industry
standard" software programs used for graphic design since the early 1990s
are products of Adobe Systems Incorporated. They are Adobe Photoshop (a
raster-based program for photo editing), Adobe Illustrator (a vector-based
program for drawing), Adobe InDesign (a page layout program), and Adobe
Dreamweaver (for Web page design). Another major page layout tool is
QuarkXpress (a product of Quark, Inc., a separate company from Adobe). Both
QuarkXpress and Adobe InDesign are often used in the final stage of the
electronic design process. Raster images may have been edited in Adobe
Photoshop, logos and illustrations in Adobe Illustrator, and the final product
assembled in one of the major page layout programs. Most graphic designers
entering the field since about 1990 are expected to be proficient in at least
one or two of these programs.
Occupations
Graphic design career paths cover
all ends of the creative spectrum and often overlap. The main job
responsibility of a Graphic Designer is the arrangement of visual elements in
some type of media. The main job titles within the industry can vary and are often
country specific. They can include graphic designer, art director, creative
director, and the entry level production artist. Depending on the industry
served, the responsibilities may have different titles such as "DTP
Associate" or "Graphic Artist", but despite changes in title,
graphic design principles remain consistent. The responsibilities may come
from, or lead to, specialized skills such as illustration, photography or
interactive design. Today's graduating graphic design students are normally exposed
to all of these areas of graphic design and urged to become familiar with all
of them as well in order to be competitive.
Graphic designers can work in a
variety of environments. Whilst many will work within companies devoted
specifically to the industry, such as design consultancies or branding
agencies, others may work within publishing, marketing or other communications
companies. Increasingly, especially since the introduction of personal
computers to the industry, many graphic designers have found themselves working
within non-design oriented organizations, as in-house designers. Graphic
designers may also work as free-lance designers, working on their own terms,
prices, ideas, etc.
A graphic designer reports to the
art director, creative director or senior media creative. As a designer becomes
more senior, they may spend less time designing media and more time leading and
directing other designers on broader creative activities, such as brand
development and corporate identity development. They are often expected to
interact more directly with clients, for example taking and interpreting
briefs.
Web design
Web design encompasses many
different skills and disciplines in the production and maintenance of websites.
The different areas of web design include web graphic design; interface design;
authoring, including standardised code and proprietary software; user
experience design; and search engine optimization. Often many individuals will
work in teams covering different aspects of the design process, although some
designers will cover them all. The term web design is normally used to describe
the design process relating to the front-end (client side) design of a website
including writing mark up, but this is a grey area as this is also covered by web
development. Web designers are expected to have an awareness of usability and
if their role involves creating mark up then they are also expected to be up to
date with web accessibility guidelines.
History (1988-2001)
Although web design has a fairly recent
history, it can be linked to other areas such as graphic design. However, web
design can also be viewed from a technological standpoint. It has become a large part
of people's everyday lives. It is hard to imagine the Internet without animated
graphics, different styles of typography, background and music.
The start of the web and web design
In 1989, whilst working at CERN,
Tim Berners-Lee proposed to create a global hypertext project, which later
became known as the World Wide Web. From 1991 to 1993 the World Wide Web
was born. Text only pages could be viewed using a simple line-mode browser. In
1993 Marc Andreessen and Eric Bina created the Mosaic browser. At the time
there were multiple browsers; however, the majority of them were Unix-based and
were naturally text heavy. There had been no integrated approach to graphical
design elements such as images or sounds. The Mosaic browser broke this mould.
The W3C was created in October 1994, to "lead the World Wide Web to its
full potential by developing common protocols that promote its evolution and
ensure its interoperability." This discouraged any one company from
monopolizing a proprietary browser and programming language, which could have
altered the effect of the World Wide Web as a whole. The W3C continues to set
standards, which can today be seen with JavaScript. In 1994 Andreessen formed
Communications Corp., which later became known as Netscape Communications and
released the Netscape 0.9 browser. Netscape created its own HTML tags without regard to the
traditional standards process. For example, Netscape 1.1 included tags for
changing background colours and formatting text with tables on web pages.
From 1996 to 1999 the browser wars raged, with Microsoft
and Netscape battling it out for ultimate browser dominance. During this time
there were many new technologies in the field, notably Cascading Style Sheets,
JavaScript, and Dynamic HTML. On the whole the browser competition did lead to
many positive creations and helped web design evolve at a rapid pace.
Evolution of web design
In 1996, Microsoft released its
first competitive browser, which was complete with its own features and tags.
It was also the first browser to support style sheets, which at the time was
seen as an obscure authoring technique. The HTML markup for tables was
originally intended for displaying tabular data. However designers quickly
realized the potential of using HTML tables for creating the complex,
multi-column layouts that were otherwise not possible. At this time design
and good aesthetics seemed to take precedence over good mark-up structure, and
little attention was paid to semantics and web accessibility. HTML sites were
limited in their design options, even more so with earlier versions of HTML. To
create complex designs, many web designers had to use complicated table
structures or even use blank spacer .GIF images to stop empty table cells from
collapsing. CSS was introduced in December 1996 by the W3C to support
presentation and layout; this allowed HTML code to be semantic rather than both
semantic and presentational, and improved web accessibility, see tableless web
design. In 1996 Flash (originally known as FutureSplash) was developed. At the
time it offered only a simple layout, basic tools, and a timeline, but it enabled
web designers to go beyond what HTML allowed at the time. It has since grown
very powerful, and can be used to develop entire sites.
End of the first browser wars
During 1998, Netscape released the
Netscape Communicator code under an open source licence, enabling thousands of
developers to participate in improving the software. However, they decided to
stop and start again from the beginning, which guided the development of the open
source browser and soon expanded to a complete application platform. The Web
Standards Project was formed and promoted browser compliance with HTML and CSS
standards by creating the Acid1, Acid2, and Acid3 tests. 2000 was a big year for
Microsoft: Internet Explorer was released for the Mac, which was significant as
it was the first browser to fully support HTML 4.01 and CSS 1, raising the
bar in terms of standards compliance. It was also the first browser to fully
support the PNG image format. During this time Netscape was sold to AOL, and
this was seen as Netscape's official loss to Microsoft in the browser wars.
History (2001-2012)
Since the start of the 21st
century the web has become more and more integrated into people's lives, and as
this has happened the technology of the web has also moved on. There have also
been significant changes in the way people use and access the web, and this has
changed how sites are designed.
The Modern Browsers
Since the end of the browser
wars, new browsers have been coming onto the scene; many of these are
open source, meaning that they tend to have faster development and be more
supportive of new standards. The new options are considered by many to be
better than Microsoft's Internet Explorer.
New Standards
The W3C has released new
standards for HTML (HTML5) and CSS (CSS3), as well as new JavaScript APIs, each
as a new but individual standard. However, while the term HTML5 strictly refers
only to the new version of HTML and some of the JavaScript APIs, it has
become common to use it to refer to the entire suite of new standards (HTML5,
CSS3, and JavaScript).
Tools and technologies
Web designers use a variety of
different tools depending on what part of the production process they are
involved in. These tools are updated over time by newer standards and software,
but the principles behind them remain the same. Web graphic designers use
vector and raster graphics packages for creating web-formatted imagery or
design prototypes. Technologies used for creating websites include standardised
mark-up, which could be hand-coded or generated by WYSIWYG editing software.
There is also proprietary software based on plug-ins that bypasses the client's
browser version; these are often WYSIWYG, but with the option of using the
software's scripting language. Search engine optimisation tools may be used to
check search engine ranking and suggest improvements.
Other tools web designers might
use include mark up validators and other testing tools for usability and
accessibility to ensure their web sites meet web accessibility guidelines.
Skills and techniques
Typography
Usually a successful website has
only a few typefaces of a similar style, instead of a wide range of
typefaces. Preferably, a website should use sans-serif or serif typefaces, not a
combination of the two; good design will incorporate a few similar typefaces
rather than a broad range. Most browsers recognize a specific number
of safe fonts, which designers mainly use in order to avoid complications.
Font downloading was later
included in the CSS3 fonts module, and has since been implemented in Safari
3.1, Opera 10 and Mozilla Firefox 3.5. This has subsequently increased interest
in Web typography, as well as the usage of font downloading.
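Both techniques mentioned above can be sketched in CSS. This is an illustrative example only: the font name and file path are assumptions, not real assets.

```css
/* Safe-font stack: the browser uses the first typeface it has installed. */
body {
  font-family: Georgia, "Times New Roman", serif;
}

/* Font downloading via the CSS3 fonts module. */
@font-face {
  font-family: "ExampleSans";          /* hypothetical font name */
  src: url("fonts/example-sans.woff"); /* hypothetical file path */
}
h1 {
  font-family: "ExampleSans", Arial, sans-serif;
}
```

The fallback list after "ExampleSans" matters: if the download fails, the browser falls back to the safe fonts.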
Most layouts on a site incorporate
white space to break the text up into paragraphs, and also avoid centre-aligned
text.
Page layout
Web pages should be well laid out
to improve navigation for the user. Also for navigation purposes, the sites
page layout should also remain consistent on different pages. When constructing
sites, it’s important to consider page width as this is vital for aligning
objects and in layout design. The most popular websites generally have a width
close to 1024 pixels. Most pages are also centre aligned, to make objects look
more aesthetically pleasing on larger screens.
Fluid layouts developed around
2000 as a replacement for HTML-table-based layouts, as a rejection of
grid-based design both as a page layout principle and as a coding
technique, but were very slow to be adopted. The axiomatic assumption is that
readers will have screen devices, or windows thereon, of different sizes and
that there is nothing the page designer can do to change this. Accordingly, a
design should be broken down into units (sidebars, content blocks, advert
areas, navigation areas) that are sent to the browser and fitted
into the display window by the browser as best it can. As the browser does
know the details of the reader's screen (window size, font size relative to the
window, etc.), it can do a better job of this than a presumptive designer.
Although such a display may often change the relative position of major content
units (sidebars may be displaced below body text rather than to the side of it,
for example), this is usually a better, and in particular a more usable,
display than a compromise attempt to display a hard-coded grid that simply
doesn't fit the device window. In particular, the relative position of content
blocks may change while each block itself is less affected. Usability is also
better, particularly through the avoidance of horizontal scrolling.
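The fluid approach described above can be sketched in a few lines of CSS; the ids and percentages here are illustrative assumptions, not a prescribed recipe.

```css
/* Units are sized relative to the browser window, not in fixed pixels,
   so the browser fits them to whatever screen it actually has. */
#sidebar { float: left; width: 25%; }
#content { float: left; width: 75%; }
/* When the window is resized, both units scale with it, and the browser
   reflows their contents rather than forcing horizontal scrolling. */
```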
Responsive Web Design is a newer
approach, based on CSS3 and a deeper level of per-device specification within
the page's stylesheet, through an enhanced use of the CSS @media
at-rule.
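A minimal sketch of such a per-device rule follows; the 480-pixel breakpoint and the id are illustrative assumptions.

```css
/* Default (wide-screen) presentation. */
#sidebar { float: left; width: 25%; }

/* Applied only when the device window is 480px wide or narrower. */
@media (max-width: 480px) {
  #sidebar { float: none; width: 100%; }  /* stack below the content */
}
```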
Quality of code
When creating a site, it is good practice to
conform to standards. This is usually done via a description specifying what
the element is doing. Not conforming to standards may not make a website
unusable or error-prone, but standards can relate to the correct layout of pages
for readability, as well as making sure coded elements are closed appropriately.
This includes avoiding errors in code, using a better layout for code, and
making sure IDs and classes are identified properly. Poorly coded pages are
sometimes colloquially called tag soup. Validating via the W3C can only be done
when a correct DOCTYPE declaration is made, which is used to highlight errors
in code. The validator identifies the errors and the areas that do not conform
to web design standards. This information can then be corrected by the user.
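For illustration, the DOCTYPE declaration a validator keys on looks like this; the first form is HTML 4.01 Strict, the second the later HTML5 form.

```html
<!-- HTML 4.01 Strict -->
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
    "http://www.w3.org/TR/html4/strict.dtd">

<!-- HTML5 -->
<!DOCTYPE html>
```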
Visual design
Good visual design on a website
identifies and works for its target market. This can be an age group or a
particular strand of culture; thus the designer should understand the trends of
the site's audience. Designers should also understand the type of website they
are designing: a business website, for example, should not be designed the same
way as a social media site. Designers should also understand the owner or
business the site is representing, to make sure they are portrayed favourably.
The aesthetics or overall design of a site should not clash with the content,
making it easier for the user to navigate and find the desired information
or products.
User experience design
For a user to understand a website, they must
be able to understand how the website works; this affects their experience.
User experience is related to layout, clear instructions, and labelling on a
website. The user must understand how they can interact on a site. In relation
to continued use, a user must perceive the usefulness of a website if they
are to continue using it. With users who are skilled and well versed in
website use, this influence relates directly to how they perceive websites,
which encourages further use; users with less experience are less likely to
see the advantages or usefulness of websites. Design should therefore focus on
more universal use and ease of access, to accommodate as many users as possible
regardless of skill.
Occupations
There are two primary jobs involved in
creating a website: the web designer and the web developer, who often work
closely together. The web designer is responsible for the visual aspect,
which includes the layout, colouring, and typography of a web page. A web
designer will also have a working knowledge of a variety of languages
such as HTML, CSS, JavaScript, PHP, and Flash, although the
extent of this knowledge will differ from one web designer to another.
Particularly in smaller organizations, one person will need the necessary skills
for designing and programming the full web page, whilst larger organizations
may have a web designer responsible for the visual aspect alone.
Further jobs, which under
particular circumstances may become involved during the creation of a website
include:
•
Graphic designers, to create visuals for the
site such as logos, layouts and buttons
•
Internet marketing specialists, to help maintain
web presence through strategic solutions on targeting viewers to the site, by
using marketing and promotional techniques on the internet.
•
SEO writers, to research and recommend the
correct words to be incorporated into a particular website and make the website
more accessible and found on numerous search engines.
•
Internet copywriter, to create the written
content of the page to appeal to the targeted viewers of the site.
•
User experience (UX) designer, who incorporates aspects of user-focused design
considerations, including information architecture, user-centred design,
user testing, interaction design, and occasionally visual design.
Multimedia
Multimedia is media and content
that uses a combination of different content forms. This contrasts with media
that use only rudimentary computer displays such as text-only or traditional
forms of printed or hand-produced material. Multimedia includes a combination
of text, audio, still images, animation, video, or interactivity content forms.
Multimedia is usually recorded
and played, displayed, or accessed by information content processing devices,
such as computerized and electronic devices, but can also be part of a live
performance. Multimedia devices are electronic media devices used to store and
experience multimedia content. Multimedia is distinguished from mixed media in
fine art; by including audio, for example, it has a broader scope. The term "rich
media" is synonymous with interactive multimedia. Hypermedia can be
considered one particular multimedia application.
Categorization of multimedia
Multimedia may be broadly divided
into linear and non-linear categories. Linear content progresses, often
without any navigational control for the viewer, such as a cinema presentation.
Non-linear content uses interactivity to control progress, as with a video game
or self-paced computer-based training. Hypermedia is an example of non-linear
content.
Multimedia presentations can be live or
recorded. A recorded presentation may allow interactivity via a navigation
system. A live multimedia presentation may allow interactivity via an
interaction with the presenter or performer.
Major characteristics of multimedia
Multimedia presentations may be viewed by a
person on stage, projected, transmitted, or played locally with a media player.
A broadcast may be a live or recorded multimedia presentation. Broadcasts and
recordings can be either analog or digital electronic media technology. Digital
online multimedia may be downloaded or streamed. Streaming multimedia may be
live or on-demand.
Multimedia games and simulations
may be used in a physical environment with special effects, with multiple users
in an online network, or locally with an offline computer, game system,
or simulator.
The various formats of
technological or digital multimedia may be intended to enhance the users'
experience, for example to make it easier and faster to convey information, or,
in entertainment or art, to transcend everyday experience.
Enhanced levels of interactivity are made
possible by combining multiple forms of media content. Online multimedia is
increasingly becoming object-oriented and data-driven, enabling applications
with collaborative end-user innovation and personalization on multiple forms of
content over time. Examples of these range from multiple forms of content on
Web sites like photo galleries with both images (pictures) and title (text)
user-updated, to simulations whose co-efficients, events, illustrations,
animations or videos are modifiable, allowing the multimedia
"experience" to be altered without reprogramming. In addition to
seeing and hearing, Haptic technology enables virtual objects to be felt.
Emerging technology involving illusions of taste and smell may also enhance the
multimedia experience.
Terminology
History of the term
The term multimedia was coined by
singer and artist Bob Goldstein (later 'Bobb Goldsteinn') to promote the July
1966 opening of his "LightWorks at L'Oursin" show at Southampton,
Long Island. Goldstein was perhaps aware of a British artist named Dick
Higgins, who had two years previously discussed a new approach to art-making he
called "intermedia."
On August 10, 1966, Richard
Albarino of Variety borrowed the terminology, reporting: "Brainchild of
songscribe-comic Bob (`Washington Square') Goldstein, the `Lightworks' is the
latest multi-media music-cum-visuals to debut as discotheque fare." Two
years later, in 1968, the term "multimedia" was re-appropriated to
describe the work of a political consultant, David Sawyer, the husband of Iris
Sawyer—one of Goldstein's producers at L'Oursin.
In the intervening forty years,
the word has taken on different meanings. In the late 1970s, the term referred to
presentations consisting of multi-projector slide shows timed to an audio
track. However, by the 1990s 'multimedia' took on its current meaning.
In the 1993 first edition of
McGraw-Hill's Multimedia: Making It Work, Tay Vaughan declared "Multimedia
is any combination of text, graphic art, sound, animation, and video that is
delivered by computer. When you allow the user—the viewer of the project — to
control what and when these elements are delivered, it is interactive
multimedia. When you provide a structure of linked elements through which the
user can navigate, interactive multimedia becomes hypermedia."
The German language society, the Gesellschaft für
deutsche Sprache, decided to recognize the word's significance and
ubiquitousness in the 1990s by awarding it the title of 'Word of the Year' in
1995. The institute summed up its rationale by stating, "[Multimedia] has
become a central word in the wonderful new media world".
In common usage, multimedia
refers to an electronically delivered combination of media, including video,
still images, audio, and text, in such a way that it can be accessed
interactively. Much of the content on the web today falls within this
definition as understood by millions. Some computers marketed in the 1990s were
called "multimedia" computers because they incorporated a CD-ROM drive, which
allowed for the delivery of several hundred megabytes of video, picture, and
audio data. That era also saw a boost in the production of educational
multimedia application CD-ROMs.
Word usage and context
Since media is the plural of
medium, the term "multimedia" is sometimes erroneously used to describe
multiple occurrences of only one form of media, such as a collection of audio
CDs. This is why it is important that the word "multimedia" be used exclusively
to describe multiple forms of media and content.
The term "multimedia" is also
ambiguous. Static content (such as a paper book) may be considered multimedia
if it contains both pictures and text or may be considered interactive if the
user interacts by turning pages at will. Books may also be considered
non-linear if the pages are accessed non-sequentially. The term
"video", if not used exclusively to describe motion photography, is
ambiguous in multimedia terminology. Video is often used to describe the file
format, delivery format, or presentation format instead of "footage"
which is used to distinguish motion photography from "animation" of
rendered motion imagery. Multiple forms of information content are often not
considered multimedia if they do not contain modern forms of presentation such
as audio or video. Likewise, single forms of information content with single
methods of information processing (e.g. non-interactive audio) are often called
multimedia, perhaps to distinguish static media from active media. In the fine
arts, for example, Leda Luss Luyken's ModulArt brings two key elements of
musical composition and film into the world of painting: variation of a theme
and movement of and within a picture, making ModulArt an interactive multimedia
form of art. Performing arts may also be considered multimedia, considering
that performers and props are multiple forms of both content and media.
Usage / Application
Multimedia finds its application
in various areas including, but not limited to, advertisements, art, education,
entertainment, engineering, medicine, mathematics, business, scientific
research, and spatial-temporal applications. Several examples are as follows:
Creative industries
Creative industries use
multimedia for a variety of purposes, ranging from fine arts to entertainment,
commercial art, journalism, and media and software services provided for
any of the industries listed below. An individual multimedia designer may cover
the spectrum throughout their career. Requests for their skills range from
technical, to analytical, to creative.
Commercial uses
Much of the electronic old and
new media used by commercial artists is multimedia. Exciting presentations are
used to grab and keep attention in advertising. Business-to-business and
interoffice communications are often developed by creative services firms into
advanced multimedia presentations, beyond simple slide shows, to sell ideas or
liven up training. Commercial multimedia developers may be hired to design for
governmental and nonprofit services applications as well.
Entertainment and fine arts
In addition, multimedia is
heavily used in the entertainment industry, especially to develop special
effects in movies and animations. Multimedia games are a popular pastime and
are software programs available either as CD-ROMs or online. Some video games also
use multimedia features. Multimedia applications that allow users to actively
participate instead of just sitting by as passive recipients of information are
called Interactive Multimedia. In the arts there are multimedia artists, whose
minds are able to blend techniques using different media that in some way
incorporates interaction with the viewer. One of the most relevant could be
Peter Greenaway who is melding Cinema with Opera and all sorts of digital
media. Another approach entails the creation of multimedia that can be
displayed in a traditional fine arts arena, such as an art gallery. Although
multimedia display material may be volatile, the survivability of the content
is as strong as any traditional media. Digital recording material may be just
as durable and infinitely reproducible with perfect copies every time.
Education
In Education, multimedia is used
to produce computer-based training courses (popularly called CBTs) and
reference books like encyclopedias and almanacs. A CBT lets the user go through
a series of presentations, text about a particular topic, and associated
illustrations in various information formats. Edutainment is the combination of
education with entertainment, especially multimedia entertainment.
Learning theory in the past
decade has expanded dramatically because of the introduction of multimedia.
Several lines of research have evolved (e.g. cognitive load, multimedia
learning, and so on). The possibilities for learning and instruction
are nearly endless.
The idea of media convergence is
also becoming a major factor in education, particularly higher education.
Defined as separate technologies such as voice (and telephony features), data
(and productivity applications) and video that now share resources and interact
with each other, synergistically creating new efficiencies, media convergence
is rapidly changing the curriculum in universities all over the world.
Likewise, it is changing the availability, or lack thereof, of jobs requiring
this savvy technological skill.
English education in middle
schools in China is well funded and assisted by various kinds of equipment. In
contrast, the original objective has not been achieved to the desired effect.
The government, schools, families, and students spend a lot of time working on
improving scores, but students hardly gain practical skills. English education
has gone into a vicious circle, and educators need to consider how to perfect
the education system to improve students' practical command of English. An
efficient way to do this is to make the class vivid. Multimedia teaching brings
students into a class where they can interact with the teacher and the subject.
Multimedia teaching is more intuitive than older methods; teachers can simulate
situations from real life, and in many circumstances they do not have to be
present, as students can learn by themselves in class. More importantly,
teachers have more approaches to stimulating students' passion for learning.
Journalism
Newspaper companies all over the world are
also trying to embrace the new phenomenon by implementing its practices in
their work. While some have been slow to come around, other major newspapers
like The New York Times, USA Today and The Washington Post are setting the
precedent for the positioning of the newspaper industry in a globalized world.
News reporting is not limited to traditional
media outlets. Freelance journalists can make use of different new media to
produce multimedia pieces for their news stories. This engages global audiences
and tells stories with technology, which develops new communication techniques
for both media producers and consumers. The Common Language Project is an
example of this type of multimedia journalism production.
Multimedia reporters who are
mobile (usually driving around a community with cameras, audio and video
recorders, and wifi-equipped laptop computers) are often referred to as Mojos,
from 'mobile journalist'.
Engineering
Software engineers may use
multimedia in computer simulations for anything from entertainment to training,
such as military or industrial training. Multimedia for software interfaces is
often created as a collaboration between creative professionals and software
engineers.
Industry
In the industrial sector, multimedia is used
as a way to help present information to shareholders, superiors, and coworkers.
Multimedia is also helpful for providing employee training, and for advertising
and selling products all over the world via virtually unlimited web-based
technology.
Mathematical and scientific research
In mathematical and scientific research,
multimedia is mainly used for modeling and simulation. For example, a scientist
can look at a molecular model of a particular substance and manipulate it to
arrive at a new substance. Representative research can be found in journals
such as the Journal of Multimedia.
Medicine
In medicine, doctors can be
trained by watching a virtual surgery, or they can simulate how the human body
is affected by diseases spread by viruses and bacteria and then develop
techniques to prevent them.
Document imaging
Document imaging is a technique
that takes a hard copy of an image or document and converts it into a digital
format (for example, via scanners).
Disabilities
Ability Media allows those with disabilities to gain qualifications in the multimedia field so they can pursue careers that give them access to a wide array of powerful communication forms.
Miscellaneous
In Europe, the reference organisation for Multimedia industry is the European Multimedia Associations Convention (EMMAC).
PART 4
"Programming"
Programming
Computer programming (often shortened to
programming, scripting, or coding) is the process of designing, writing,
testing, debugging, and maintaining the source code of computer programs. This
source code is written in one or more programming languages (such as Java, C++,
C#, Python, etc.). The purpose of programming is to create a set of instructions
that computers use to perform specific operations or to exhibit desired
behaviors. The process of writing source code often requires expertise in many
different subjects, including knowledge of the application domain, specialized
algorithms and formal logic. Within software engineering, programming (the
implementation) is regarded as one phase in a software development process.
There is an ongoing debate on the extent to
which the writing of programs is an art form, a craft, or an engineering
discipline. In general, good programming is considered to be the measured
application of all three, with the goal of producing an efficient and evolvable
software solution (the criteria for "efficient" and
"evolvable" vary considerably). The discipline differs from many
other technical professions in that programmers, in general, do not need to be
licensed or pass any standardized (or governmentally regulated) certification
tests in order to call themselves "programmers" or even
"software engineers." Because the discipline covers many areas, which
may or may not include critical applications, it is debatable whether licensing
is required for the profession as a whole. In most cases, the discipline is
self-governed by the entities which require the programming, and sometimes very
strict environments are defined (e.g. United States Air Force use of AdaCore
and security clearance). However, representing oneself as a "Professional
Software Engineer" without a license from an accredited institution is
illegal in many parts of the world.
Another ongoing
debate is the extent to which the programming language used in writing computer
programs affects the form that the final program takes. This debate is
analogous to that surrounding the Sapir–Whorf hypothesis in linguistics and
cognitive science, which postulates that a particular spoken language's nature
influences the habitual thought of its speakers. Different language patterns yield
different patterns of thought. This idea challenges the possibility of
representing the world perfectly with language, because it acknowledges that
the mechanisms of any language condition the thoughts of its speaker community.
History
Ancient cultures had no conception of
computing beyond simple arithmetic. The only mechanical device that existed for
numerical computation at the beginning of human history was the abacus,
invented in Sumeria circa 2500 BC. Later, the Antikythera mechanism, invented
some time around 100 BC in ancient Greece, was the first mechanical calculator
utilizing gears of various sizes and configurations to perform calculations; it
tracked the Metonic cycle still used in lunar-to-solar calendars, and was
consistent in calculating the dates of the Olympiads. The Kurdish
medieval scientist Al-Jazari built programmable automata in 1206 AD. One system
employed in these devices was the use of pegs and cams placed into a wooden
drum at specific locations, which would sequentially trigger levers that in
turn operated percussion instruments. The output of this device was a small
drummer playing various rhythms and drum patterns. The Jacquard Loom, which
Joseph Marie Jacquard developed in 1801, uses a series of pasteboard cards with
holes punched in them. The hole pattern represented the pattern that the loom
had to follow in weaving cloth. The loom could produce entirely different
weaves using different sets of cards. Charles Babbage adopted the use of
punched cards around 1830 to control his Analytical Engine. The first computer
program was written for the Analytical Engine by mathematician Ada Lovelace to
calculate a sequence of Bernoulli Numbers. The synthesis of numerical
calculation, predetermined operation and output, along with a way to organize
and input instructions in a manner relatively easy for humans to conceive and
produce, led to the modern development of computer programming. Development of
computer programming accelerated through the Industrial Revolution.
In the late 1880s, Herman Hollerith invented
the recording of data on a medium that could then be read by a machine. Prior
uses of machine readable media, above, had been for control, not data.
"After some initial trials with paper tape, he settled on punched
cards..." To process these punched cards, first known as "Hollerith
cards," he invented the tabulator and the keypunch machines. These three
inventions were the foundation of the modern information processing industry.
In 1896 he founded the Tabulating Machine Company (which later became the core
of IBM). The addition of a control panel (plugboard) to his 1906 Type I
Tabulator allowed it to do different jobs without having to be physically
rebuilt. By the late 1940s, there were a variety of control panel programmable
machines, called unit record equipment, to perform data-processing tasks.
The invention
of the von Neumann architecture allowed computer programs to be stored in
computer memory. Early programs had to be painstakingly crafted using the instructions
(elementary operations) of the particular machine, often in binary notation.
Every model of computer would likely use different instructions (machine
language) to do the same task. Later, assembly languages were developed that
let the programmer specify each instruction in a text format, entering
abbreviations for each operation code instead of a number and specifying
addresses in symbolic form (e.g., ADD X, TOTAL). Entering a program in assembly
language is usually more convenient, faster, and less prone to human error than
using machine language, but because an assembly language is little more than a
different notation for a machine language, any two machines with different
instruction sets also have different assembly languages.
In 1954, FORTRAN was invented; it was the
first high level programming language to have a functional implementation, as
opposed to just a design on paper. (A high-level language is, in very general
terms, any programming language that allows the programmer to write programs in
terms that are more abstract than assembly language instructions, i.e. at a
level of abstraction "higher" than that of an assembly language.) It
allowed programmers to specify calculations by entering a formula directly
(e.g. Y = X**2 + 5*X + 9). The program text, or source, is converted into machine
instructions using a special program called a compiler, which translates the
FORTRAN program into machine language. In fact, the name FORTRAN stands for
"Formula Translation". Many other languages were developed, including
some for commercial programming, such as COBOL. Programs were mostly still
entered using punched cards or paper tape. (See computer programming in the
punch card era). By the late 1960s, data storage devices and computer terminals
became inexpensive enough that programs could be created by typing directly
into the computers. Text editors were developed that allowed changes and
corrections to be made much more easily than with punched cards. (Usually, an
error in punching a card meant that the card had to be discarded and a new one
punched to replace it.)
As time has
progressed, computers have made giant leaps in the area of processing power.
This has brought about newer programming languages that are more abstracted
from the underlying hardware. Popular programming languages of the modern era
include C++, C#, Objective-C, Visual Basic, SQL, HTML with PHP, ActionScript,
Perl, Java, JavaScript, Ruby, Python, Haskell and dozens more. Although these
high-level languages usually incur greater overhead, the increase in speed of
modern computers has made the use of these languages much more practical than
in the past. These increasingly abstracted languages typically are easier to
learn and allow the programmer to develop applications much more efficiently
and with less source code. However, high-level languages are still impractical
for a few programs, such as those where low-level hardware control is necessary
or where maximum processing speed is vital. Computer programming has become a
popular career in the developed world, particularly in the United States,
Europe, Scandinavia, and Japan. Due to the high labor cost of programmers in
these countries, some forms of programming have been increasingly subject to
offshore outsourcing (importing software and services from other countries,
usually at a lower wage), making programming career decisions in developed
countries more complicated, while increasing economic opportunities for
programmers in less developed areas, particularly China and India.
Programming
languages
Different
programming languages support different styles of programming (called
programming paradigms). The choice of language used is subject to many
considerations, such as company policy, suitability to task, availability of
third-party packages, or individual preference. Ideally, the programming
language best suited for the task at hand will be selected. Trade-offs from
this ideal involve finding enough programmers who know the language to build a
team, the availability of compilers for that language, and the efficiency with
which programs written in a given language execute. Languages form an
approximate spectrum from "low-level" to "high-level";
"low-level" languages are typically more machine-oriented and faster
to execute, whereas "high-level" languages are more abstract and
easier to use but execute less quickly. It is usually easier to code in
"high-level" languages than in "low-level" ones.
Allen Downey,
in his book How to Think Like a Computer Scientist, writes: The details look
different in different languages, but a few basic instructions appear in just
about every language:
• input:
Gather data from the keyboard, a file, or some other device.
• output:
Display data on the screen or send data to a file or other device.
• arithmetic:
Perform basic arithmetical operations like addition and multiplication.
• conditional
execution: Check for certain conditions and execute the appropriate sequence of
statements.
• repetition:
Perform some action repeatedly, usually with some variation.
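A minimal sketch in Python touching each of these five basic instructions (illustrative only; the hard-coded list stands in for input from a keyboard or file):

```python
# A tiny program exercising Downey's five basic instructions.
data = [4, 7, 1, 9]          # input (hard-coded stand-in for a file/keyboard)
total = 0
for n in data:               # repetition
    if n % 2 == 1:           # conditional execution
        total = total + n    # arithmetic
print("sum of odd values:", total)  # output -> 17
```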
Many computer
languages provide a mechanism to call functions provided by libraries, such as
those in a shared object (.so) file. Provided the functions in a library follow
the appropriate run-time conventions (e.g., the method of passing arguments),
these functions may be written in any other language.
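As a sketch of this cross-language mechanism, Python's ctypes module can call a function from a C shared library by following the C run-time calling conventions. This assumes a Unix-like system where the standard math library can be located:

```python
# Calling a function from a shared library (.so) through its run-time
# conventions, regardless of the language the library was written in.
# Assumes a Unix-like system where the C math library can be found.
import ctypes
import ctypes.util

libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.sqrt.restype = ctypes.c_double      # declare the return type
libm.sqrt.argtypes = [ctypes.c_double]   # and the argument types

print(libm.sqrt(2.0))  # the C library's sqrt, not Python's
```

The caller only needs to know the function's calling convention and signature, not what language the library was implemented in.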
Programmers
Computer programmers are those who write
computer software. Their jobs usually involve:
• Coding
• Compilation
• Debugging
•
Documentation
• Integration
• Maintenance
• Requirements
analysis
• Software
architecture
• Software
testing
•
Specification
"Computer Tomorrow"
Electronic Communication Network
An electronic communication
network (ECN) is a financial term for a type of computer system that
facilitates trading of financial products outside of stock exchanges. The
primary products that are traded on ECNs are stocks and currencies. The first
ECN, Instinet, was created in 1969. ECNs increase competition among trading
firms by lowering transaction costs, giving clients full access to their order
books, and offering order matching outside of traditional exchange hours. ECNs
are sometimes also referred to as Alternative Trading Systems or Alternative
Trading Networks.
Function
To trade with an ECN, one must be
a subscriber or have an account with a broker that provides direct access
trading. ECN subscribers can enter orders into the ECN via a custom computer
terminal or network protocols. The ECN will then match contra-side orders (i.e.
a sell-order is "contra-side" to a buy-order with the same price and
share count) for execution. The ECN will post unmatched orders on the system
for other subscribers to view. Generally, the buyer and seller are anonymous,
with the trade execution reports listing the ECN as the party.
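The matching rule described above can be sketched as follows (a toy model, not any actual ECN's engine): an incoming order executes against a resting contra-side order with the same price and share count, and is otherwise posted on the book:

```python
# Minimal sketch of ECN-style contra-side matching (illustrative only).
book = []  # resting, unmatched orders

def submit(side, price, shares):
    contra = "sell" if side == "buy" else "buy"
    for order in book:
        if order == (contra, price, shares):    # contra-side match found
            book.remove(order)
            return ("executed", price, shares)  # ECN listed as the party
    book.append((side, price, shares))          # post the unmatched order
    return ("posted", price, shares)

submit("buy", 10.50, 100)          # no contra-side order yet: posted
print(submit("sell", 10.50, 100))  # matches the resting buy: executed
```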
Some ECN brokers may offer
additional features to subscribers such as negotiation, reserve size, and
pegging, and may offer access to the entire ECN book (as opposed to the
"top of the book"), which contains real-time market data regarding depth of
trading interest.
ECNs are generally facilitated by
electronic negotiation, a type of communication between agents that allows
cooperative and competitive sharing of information to determine a proper price.
Negotiation types
The most common paradigm is the
electronic auction type. As of 2005, most e-business negotiation systems can
only support price negotiations. Traditional negotiations typically include
discussion of other attributes of a deal, such as delivery terms or payment conditions.
This one-dimensional approach is one of the reasons why electronic markets
struggle for acceptance. Multiattributive and combinatorial auction mechanisms
are emerging to allow further types of negotiation.
Support for complex multi-attribute negotiations
is a critical success factor for the next generation of electronic markets and,
more generally, for all types of electronic exchanges. This is what the second
type of Electronic negotiation, namely Negotiation Support, addresses. While
auctions are essentially mechanisms, bargaining is often the only choice in
complex cases or those cases where no choice of partners is given. Bargaining
is a hard, error-prone, ambiguous task often performed under time pressure.
Information technology has some potential to facilitate negotiation processes
which is analyzed in research projects/prototypes such as INSPIRE, Negoisst or
WebNS.
The third type of negotiation is automated
argumentation, where agents exchange not only values, but also arguments for
their offers/counter-offers. This requires agents to be able to reason about
the mental states of other market participants.
Technologies
One research area that has paid
particular attention to modeling automated negotiations is that of autonomous
agents. If negotiations occur frequently, possibly on a minute-by-minute basis
(for example, to schedule network capacity), or if negotiation topics can be
clearly defined, it may be desirable to automate this coordination effort.
Automated negotiation is a key
form of interaction in complex systems composed of autonomous agents.
Negotiation is a process of making offers and counteroffers, with the aim of
finding an acceptable agreement. During negotiation, each offer is based on the
agent's own utility and its expectation of what the other party will accept.
This means that a multi-criteria decision must be made for each offer.
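The offer/counteroffer process can be sketched under an assumed concession model (a generic textbook model, not any specific system): each agent concedes toward its reservation price each round, its limit encoding its utility threshold, and a deal is struck when the offers cross:

```python
# Sketch of automated bilateral price negotiation (assumed concession
# model, illustrative only). buyer_limit and seller_limit are the
# agents' reservation prices; the names are hypothetical.
def negotiate(buyer_limit, seller_start, seller_limit, step=1.0, max_rounds=100):
    buyer_offer, seller_offer = 0.0, seller_start
    for _ in range(max_rounds):
        if buyer_offer >= seller_offer:              # offers crossed: agree
            return (buyer_offer + seller_offer) / 2  # split the difference
        if buyer_offer < buyer_limit:
            buyer_offer += step                      # buyer concedes upward
        if seller_offer > seller_limit:
            seller_offer -= step                     # seller concedes downward
    return None                                      # no zone of agreement

print(negotiate(buyer_limit=105.0, seller_start=120.0, seller_limit=95.0))  # -> 95.0
```

When the buyer's limit lies below the seller's limit there is no zone of agreement and the sketch returns None, mirroring a failed negotiation.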
In the stock market
For stock trading, ECNs exist as
a class of SEC-permitted Alternative Trading Systems (ATS). As an ATS, ECNs
exclude broker-dealers' internal crossing networks — i.e., systems that match
orders in private using prices from a public exchange.
Fee structure
ECN fee structures can be
grouped into two basic types: a classic structure and a credit (or rebate)
structure. Both fee structures offer advantages of their own. The classic
structure tends to attract liquidity removers while the credit structure
appeals to liquidity providers. However since both removers and providers of
liquidity are necessary to create a market, ECNs must choose their fee structures
carefully.
In a credit structure ECNs make a
profit from paying liquidity providers a credit while charging a debit to
liquidity removers. Credits range from $0.002 to $0.00295 per share for
liquidity providers, and debits from $0.0025 to $0.003 per share for liquidity
removers. The fee can be determined by monthly volume provided and removed, or
by a fixed structure, depending on the ECN. This structure is common on the
NASDAQ market. Traders commonly quote the fees in millicents
or "mils" (e.g., $0.00295 is 29.5 mils).
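The per-share figures above imply a simple margin for the ECN; a quick calculation using the quoted rates (the trade size is an assumption for illustration):

```python
# Credit (rebate) fee structure arithmetic, using the per-share figures
# quoted above; the 10,000-share trade size is assumed for illustration.
shares = 10_000
credit = 0.0020   # paid to the liquidity provider per share (20 mils)
debit  = 0.0030   # charged to the liquidity remover per share (30 mils)

provider_rebate = shares * credit            # paid out by the ECN
remover_fee     = shares * debit             # collected by the ECN
ecn_profit      = remover_fee - provider_rebate
print(f"ECN profit on the trade: ${ecn_profit:.2f}")  # -> $10.00
```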
In a classic structure, the ECN will charge a
small fee to all market participants using their network, both liquidity
providers and removers. They also can attract volume to their networks by
giving lower prices to large liquidity providers. Fees for ECNs that operate
under a classic structure range from $0 to $0.0015, or even higher depending on
each ECN. This fee structure is more common on the NYSE; however, some ECNs
have recently moved their NYSE operations into a credit structure.
Currency trading
The first ECN for Internet
currency trading was the New York-based Matchbook FX, formed in 1999. Back then,
all the prices were created and supplied by Matchbook FX's traders/users,
including banks, within its ECN network. This was unique at the time, as
it empowered buy-side FX market participants, historically always "price
takers", to finally be price makers as well. Today, FX ECNs like Currenex,
Bloomberg Tradebook (an affiliate of Bloomberg L.P.), Hotspot FX, 360T, FXall
& BAXTER Financial Services Ltd with Currency Dealing provide access to an
electronic trading network, supplied with streaming quotes from the top tier
banks in the world. Their matching engines perform limit checks and match
orders, usually in less than 100 milliseconds per order. The matching is quote
driven and these are the prices that match against all orders. Spreads are
discretionary but in general multibank competition creates 1-2 pip spreads on
USD Majors and Euro Crosses. The order book is not a routing system that sends
orders to individual market makers. It is a live exchange type book working
against the best bid/offer of all quotes. By trading through an ECN, a currency
trader generally benefits from greater price transparency, faster processing,
increased liquidity and more availability in the marketplace. Banks also reduce
their costs as there is less manual effort involved in using an ECN for
trading.
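To put a 1-2 pip spread in concrete terms, a rough cost calculation (assuming the common convention of 1 pip = 0.0001 for most USD pairs and a standard 100,000-unit lot; both assumptions, not figures from the text):

```python
# Cost of a quoted spread in pips on a USD major.
# Assumed conventions: 1 pip = 0.0001; one standard lot = 100,000 units.
pip = 0.0001
spread_pips = 2          # the upper end of the 1-2 pip spread cited above
notional = 100_000       # one standard lot (assumed trade size)

spread_cost = spread_pips * pip * notional
print(f"spread cost on one lot: ${spread_cost:.2f}")  # -> $20.00
```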
History
One of the key developments in the history of
ECNs was the NASDAQ over-the-counter quotation system. NASDAQ was created
following a 1969 American Stock Exchange study which estimated that errors in
the processing of hand-written securities orders cost brokerage firms
approximately $100 million per year. The NASDAQ system automated such order
processing and provided brokers with the latest competitive price quotes via a
computer terminal. In March 1994, a study by two economists, William Christie
and Paul Schultz, noted that NASDAQ bid-ask spreads were larger than was
statistically likely, indicating "We are unable to envision any scenario
in which 40 to 60 dealers who are competing for order flow would simultaneously
and consistently avoid using odd-eighth quotes without an implicit agreement to
post quotes only on the even price fractions. However, our data do not provide
direct evidence of tacit collusion among NASDAQ market makers." These
results led to an antitrust lawsuit being filed against NASDAQ. As part of
NASDAQ's settlement of the antitrust charges, NASDAQ adopted new order handling
rules that integrated ECNs into the NASDAQ system. Shortly after this
settlement, the SEC adopted Regulation ATS, which permitted ECNs the option of
registering as stock exchanges or else being regulated under a separate set of
standards for ECNs.
At that time the major ECNs that became active
were Instinet and Island (part of Instinet was spun off, merged with Island
into Inet, and acquired by NASDAQ), Archipelago Exchange (which was acquired by
the NYSE), and Brut (now acquired by NASDAQ).
ECNs enjoyed a resurgence after
the adoption of SEC Regulation NMS, which required "trade through"
protection of orders in the market, regardless of where those orders are
placed.
Remember when the Internet was
known as the "information super highway"? Currently, the Internet is
often called Web 2.0. And who knows what we'll call it a few years from now.
Though relatively young in the grand scheme of things, the Internet has evolved
rapidly. Today's Internet is a far cry from yesteryear, and though its future has
yet to materialize, one thing is certain: Tomorrow's Internet will be yet
another incarnation.
Internet Issues
The Internet's Past
The Internet is a child of the 1960s, with its
roots dating back to 1969 when the first network of computers, ARPANET,
communicated with one another. It took a full decade before the Internet
Protocol was developed. In 1984, the domain name system was created, bringing
with it the familiar suffixes of .com and .org.
Still primarily academic, the
Internet wasn't widely used until the 1990s when two significant developments
arrived. In 1991, the World Wide Web was ushered in. Hyperlinks made navigation
much easier than in the past. And in 1993, the first Web browser, Mosaic,
arrived, making for a graphical user experience. By the mid-1990s, an estimated
45 million users were using the Internet. By 2000, that number exploded to over
400 million. The Internet was officially here to stay.
In its early incarnation, users
connected to the Internet primarily through dial-up networking which consisted
of a modem and a phone line. Users would connect, search for information, check
email messages, and then disconnect once these tasks were complete thus freeing
the phone line for traditional phone calls.
The Internet Today
Today the Internet isn't a
side activity; it's a main attraction. High-speed, broadband connections have
largely replaced dial-up networking. Now, many computer users are connected to
the Internet around the clock. In addition, mobile phones and other devices such
as PDAs and gaming consoles now connect to the Internet.
While yesterday's websites were
static, today's sites are dynamic. It is a social medium where users are
engaged. We shop online, we bank online, we play games online, we read the news
online, we listen to music online, we make phone calls online, we watch TV and
movies online, we connect with other users online, we create our own media
online, we do business online, and the list goes on. The Internet has affected
nearly everything that we do.
In the past, we purchased music and software
on CDs. Today, many users buy music downloads while others subscribe to
unlimited streaming music subscriptions. Software is now available as a service
"in the cloud." Rather than buying a disc, installing the software,
and owning it outright, software can be accessed online via a monthly
subscription.
Along with the advances made comes a darker
side: computer viruses, spyware, and privacy concerns. Hackers and malware
developers are running rampant, fueling a cat-and-mouse game between the black
hatters and computer security experts. In addition, privacy concerns have been
raised: not only does malware threaten privacy, but some users willingly or
unwittingly give up personal information online over social networks, and some
people are concerned about the potential for government monitoring.
The Internet's Future
If you could gaze into a crystal ball and see
the Internet in the future, what would it look like? No one knows for sure, but
we can speculate. With the popularity of mobile devices such as the iPad, cell
phones, and eBook readers, it's likely that the Internet will continue to
spread into other areas of our lives. Touch screens and voice recognition
technologies may render the keyboard and mouse obsolete. It's also likely that
more content will be delivered via the Internet than over traditional media
such as radio, television, print, and CDs. Cloud computing may also become more
prevalent.
The Internet has been fascinating
the world on a grand scale for nearly two decades. It is sure to continue its
evolution, surprising us with its wonders for decades to come.
New
Technology
Windows 8
Windows 8 is the current release
of the Windows operating system, produced by Microsoft for use on personal
computers, including home and business desktops, laptops, tablets, and home
theater PCs. Development of Windows 8 started before the release of its
predecessor in 2009. Its existence was first
announced
at CES 2011, and followed by the release of three pre-release versions from
September 2011 to May 2012. The operating system was released to manufacturing
on August 1, 2012, and was released for general availability on October 26,
2012.
Windows 8 introduces significant changes
to the operating system's platform, primarily focused towards improving its user experience on mobile devices
such as tablets to rival other mobile operating systems like Android and iOS, taking advantage of new or emerging
technologies like USB 3.0, UEFI firmware, near field communications,
cloud computing and the low-power ARM architecture, new security features such
as malware filtering, built-in antivirus capabilities, a new installation
process optimized for digital distribution, and support for secure boot (a UEFI
feature which allows operating systems to be digitally signed to prevent
malware from altering the boot process), the ability to synchronize certain
apps and settings between multiple devices, along with other changes and
performance improvements. Windows 8 also introduces a new shell and user
interface based on Microsoft's
"Metro" design language, featuring a new Start screen with a grid of
dynamically updating tiles to represent applications, a new app platform
with an emphasis on touchscreen input, and the new Windows Store to obtain
and/or purchase applications to run on the operating system.
Windows 8 was released to mixed
reception—although reception towards its performance improvements, security
enhancements, and its improved support for touchscreen devices was positive, the new user interface of the operating system has
been widely criticized for being confusing and having a steep learning
curve (especially when used with a keyboard and mouse instead of a
touchscreen). Despite these shortcomings, 40
million Windows 8 licenses were sold during its first month of availability,
mostly to original equipment manufacturers (OEMs).
Software
compatibility
The three desktop editions of Windows 8
are sold in two sub-editions: 32-bit and 64-bit. The 32-bit sub-edition runs on
CPUs compatible with the x86 architecture's 3rd generation (known as IA-32) or
newer, and can only run 32-bit
programs. The 64-bit sub-edition runs on CPUs compatible with x86 8th
generation (known as x86-64, or x64) or
newer, and can run both 32-bit and 64-bit programs. 32-bit programs and
operating systems are restricted to addressing only 4 gigabytes of memory,
while 64-bit systems can theoretically support 2048 gigabytes of memory. 64-bit
operating systems require a different set of device drivers than 32-bit
operating systems.
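The 4-gigabyte figure follows directly from the address width: with 32-bit addresses there are only 2**32 distinct byte addresses. A quick check:

```python
# Why a 32-bit system tops out at 4 GB: 2**32 distinct byte addresses.
addressable_32 = 2**32              # bytes
print(addressable_32 // 2**30)      # -> 4 (gigabytes)

# A 64-bit address space is architecturally far larger; the 2048 GB
# figure cited above is an operating-system limit, not an architectural one.
addressable_64 = 2**64              # bytes
print(addressable_64 // 2**30)      # -> 17179869184 (gigabytes)
```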
The 32-bit
edition of Windows 8 is capable of running 16-bit applications, although 16-bit
support must be enabled first. 16-bit applications are
developed for CPUs compatible with x86 2nd generation, first conceived in 1978.
Microsoft began moving away from this architecture with Windows 95.
Windows RT, the only edition of Windows 8 for systems with ARM processors,
only supports applications included with the system (such as a special version
of Office 2013), supplied through Windows Update, or Windows Store apps, to
ensure that the system only runs applications that are optimized for the
architecture. Windows RT does not support running IA-32 or x64 applications. Windows
Store apps can either be cross-compatible between Windows 8 and Windows RT, or
compiled to support a specific architecture.
New and
changed features
New features and functionality in Windows 8 include a faster
startup through UEFI integration and the new "Hybrid Boot" mode
(which hibernates the Windows kernel on shutdown to speed up the subsequent boot), a
new lock screen with a clock and notifications, and the ability for enterprise
users to create live USB versions of Windows
(known as Windows To Go). Windows 8 also adds native support for USB 3.0
devices, which allow for faster data transfers and improved power management
with compatible devices, along with support for near field communication to
facilitate sharing and communication between devices.
Windows Explorer, which has been renamed
File Explorer, now includes a ribbon in place of the command bar. File
operation dialog boxes have been updated to provide more detailed statistics,
the ability to pause file transfers, and improvements in the ability to manage
conflicts when copying files. A new "File History" function allows
incremental revisions of files to be backed up to and restored from a secondary
storage device, while Storage Spaces allows users to combine different sized
hard disks into virtual drives and specify mirroring, parity, or no redundancy
on a folder-by-folder basis.
Task Manager has also been redesigned,
including a new processes tab with the option to display fewer or more details
of running applications and background processes, a heat map using different
colors indicating the level of resource usage, network and disk counters,
grouping by process type (e.g. applications,
background processes and Windows processes), friendly names for processes and a
new option which allows users to
search the web to find information about obscure processes. Additionally, the
Blue Screen of Death has been updated with a simpler and modern design with
less technical information displayed.
Android (operating system)
Android is a Linux-based operating system designed primarily for touchscreen
mobile devices such as smartphones and tablet
computers. Initially developed by Android, Inc., which Google financially backed
and later purchased in 2005, Android was unveiled in 2007 along with the
founding of the Open Handset Alliance: a consortium of hardware, software, and
telecommunication companies devoted to advancing open standards for mobile
devices. The first Android-powered phone was sold in October
2008.
Android is
open source and Google releases the code under the Apache License. This open
source code and permissive licensing allows the
software to be freely modified and distributed by device manufacturers,
wireless carriers and enthusiast developers. Additionally, Android has a large
community of developers writing applications ("apps") that extend the
functionality of devices, written primarily in a customized version of the Java programming language. In October 2012,
there were approximately 700,000 apps available for Android, and the estimated
number of applications downloaded from Google Play, Android's primary app
store, was 25 billion.
These factors have allowed Android to become
the world's most widely used smartphone platform and the software of choice for
technology companies who require a low-cost, customizable, lightweight
operating system for high tech devices without developing one from scratch. As
a result, despite being primarily designed for phones and tablets, it has seen
additional applications on televisions, games consoles and other electronics.
Android's open nature has further encouraged a large community of developers
and enthusiasts to use the open source code as a foundation for
community-driven projects, which add new features for advanced users or bring
Android to devices which were officially released running other operating
systems.
Android had a worldwide smartphone
market share of 75% during the third quarter of 2012, with 500 million devices
activated in total and 1.3 million activations per day. The operating system's
success has made it a target for patent litigation as part of the so-called
"smartphone wars" between technology companies.
Description
Interface
Android's user interface is based on
direct manipulation, using touch inputs that loosely correspond to real-world
actions, like swiping, tapping, pinching and reverse pinching to manipulate
on-screen objects. The response to user input is designed to be immediate and
provides a fluid touch interface, often using the vibration capabilities of the
device to provide haptic feedback to the user. Internal hardware such as
accelerometers, gyroscopes and proximity sensors are used by some applications
to respond to additional user actions, for example adjusting the screen from
portrait to landscape depending on how the device is oriented, or allowing the
user to steer a vehicle in a racing game by rotating the device, simulating
control of a steering wheel.
Android devices boot to the homescreen,
the primary navigation and information point on the device, which is similar to
the desktop found on PCs. Android homescreens are typically made up of app
icons and widgets; app icons launch the associated app, whereas widgets display
live, auto-updating content such as the weather forecast, the user's email
inbox, or a news ticker directly on the homescreen. A homescreen may be made up
of several pages that the user can swipe back and forth between, though
Android's homescreen interface is heavily customisable, allowing the user to
adjust the look and feel of the device to their tastes. Third party apps
available on Google Play and other app stores can extensively re-theme the homescreen, and even mimic the look
of other operating systems, such as Windows Phone. Most manufacturers,
and some wireless carriers, customise the look and feel of their Android
devices to differentiate themselves from the competition.
Present along the top of the screen is a
status bar, showing information about the device and its connectivity. This
status bar can be "pulled" down to reveal a notification screen where
apps display important information or updates, such as a newly received email
or SMS text, in a way that doesn't immediately
interrupt or inconvenience the user. In early versions of Android these
notifications could be tapped to open the relevant app, but recent
updates have provided enhanced functionality, such as the ability to call a number
back directly from the missed call notification without having to open the
dialer app first. Notifications are persistent until read or dismissed by the
user.
Applications
Android has a growing selection of third party
applications, which can be acquired by users either through an app store such
as Google Play or the Amazon Appstore, or by downloading and installing the application's APK file from a third-party site. The
Play Store application allows users to browse, download and update apps
published by Google and third-party developers, and is pre-installed on devices
that comply with Google's compatibility requirements. The app filters the list
of available applications to those that are compatible with the user's device,
and developers may restrict their applications to particular carriers or
countries for business reasons. Purchases of unwanted applications can be refunded within 15 minutes of the time of
download, and some carriers offer direct carrier billing for Google Play
application purchases, where the cost of the application is added to the user's
monthly phone bill.
Applications are developed in the Java
language using the Android software development kit (SDK). The SDK includes a
comprehensive set of development tools, including a debugger, software
libraries, a handset emulator based on QEMU,
documentation, sample code, and tutorials. The officially supported
integrated development environment (IDE) is Eclipse using the Android
Development Tools (ADT) plugin. Other
development tools are available, including a Native Development Kit for
applications or extensions in C or C++, Google App Inventor, a visual
environment for novice programmers, and various cross platform mobile web
applications frameworks.
In order to work around limitations on reaching Google
services due to Internet censorship in the People's
Republic of China, Android devices sold in the PRC are generally customized to
use state approved services instead.