Pages

Monday, December 1, 2008

DOS

DOS, short for "Disk Operating System",[1] is a generic term for several closely related operating systems that dominated the IBM PC compatible market between 1981 and 1995, or until about 2000 if one includes the partially DOS-based Microsoft Windows versions 95, 98, and Millennium Edition.
Related systems include MS-DOS, PC-DOS, DR-DOS, FreeDOS, PTS-DOS, ROM-DOS, and several others.
In spite of the common usage, none of these systems were simply named "DOS" (a name given only to an unrelated IBM mainframe operating system in the 1960s). A number of unrelated, non-x86 microcomputer disk operating systems had "DOS" in their name, and are often referred to simply as "DOS" when discussing machines that use them (e.g. AmigaDOS, AMSDOS, ANDOS, Apple DOS, Atari DOS, Commodore DOS, CSI-DOS, ProDOS, and TRS-DOS). While providing many of the same operating system functions for their respective computer systems, programs running under any one of these operating systems would not run under others.

Design
All MS-DOS-type operating systems run on machines with Intel x86 or compatible CPUs, mainly the IBM PC and compatibles. Machine-dependent versions of MS-DOS were produced for many non-IBM-compatible x86-based machines, with variations ranging from a simple relabelling of the Microsoft distribution under the manufacturer's name to versions specifically designed to work with non-IBM-PC-compatible hardware. DOS-C's predecessor DOS/NT ran on Motorola 68000 CPUs.
DOS is a single-user, single-tasking operating system with basic kernel functions that are non-reentrant: only one program at a time can use them. There is an exception with Terminate and Stay Resident (TSR) programs, and some TSRs can allow a degree of multitasking. However, the non-reentrant kernel remains a problem: once a process calls a service inside the operating system kernel (a system call), it must not be interrupted by another process making a system call until the first call has finished.
The DOS kernel provides various functions for programs (an application programming interface), such as displaying characters on-screen, reading a character from the keyboard, accessing disk files, and more.
DOS by default provides a primitive ability for shell scripting via batch files (with the filename extension .BAT). These are text files that can be created in any DOS text editor, such as the MS-DOS Editor. They are executed in the same fashion as compiled programs, with each line of the batch file run as a command. Batch files can also make use of several internal commands, such as goto and conditional statements. gosub and simple arithmetic are supported in some third-party shells and can also be approximated with workarounds, but batch files offer nothing close to a full programming language.
The operating system offers a hardware abstraction layer that allows development of character-based applications, but not one for accessing most other hardware, such as graphics cards, printers, or mice. This required programmers to access the hardware directly, usually resulting in each application having its own set of device drivers for each hardware peripheral. Hardware manufacturers would release specifications to ensure device drivers for popular applications were available.

Drive naming scheme
In DOS, drives are referred to by identifying letters. Standard practice is to reserve "A" and "B" for floppy drives. On systems with only one floppy drive DOS assigns both letters to the drive, prompting the user to swap disks as programs alternate access between them. This facilitates copying from floppy to floppy or having a program run from one floppy while accessing its data on another. Hard drives were originally assigned the letters "C" and "D". DOS could only support one active partition per drive. As support for more hard drives became available, this developed into first assigning a drive letter to each drive's active primary partition, then making a second pass over the drives to allocate letters to logical drives in the extended partition, then a third pass to give any other non-active primary partitions their names (where such additional partitions exist and contain a DOS-supported file system). Lastly, DOS allocates letters for optical disc drives, RAM disks, and other hardware. Letter assignments usually occur in the order the drivers are loaded, but the drivers can instruct DOS to assign a different letter; drivers for network drives, for example, typically assign letters nearer the end of the alphabet.
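As a rough illustration of the assignment order just described, here is a minimal Python sketch. The partition-description format, the function name, and the pass structure as literal loops are invented for this example and do not correspond to real DOS internals.

# Illustrative sketch of the DOS drive-letter assignment order described above.
# The partition-description format is invented for this example; real DOS reads
# partition tables, not Python dictionaries.

def assign_letters(floppies, hard_disks, other_devices):
    letters = {}
    # "A" and "B" are reserved for floppy drives; a single floppy gets both.
    if len(floppies) == 1:
        letters['A'] = letters['B'] = floppies[0]
    else:
        for letter, drive in zip('AB', floppies):
            letters[letter] = drive

    next_code = ord('C')
    def take():
        nonlocal next_code
        letter = chr(next_code)
        next_code += 1
        return letter

    # Pass 1: the active primary partition of each hard disk.
    for disk in hard_disks:
        for part in disk:
            if part['type'] == 'primary' and part['active']:
                letters[take()] = part
    # Pass 2: logical drives inside extended partitions.
    for disk in hard_disks:
        for part in disk:
            if part['type'] == 'logical':
                letters[take()] = part
    # Pass 3: remaining (non-active) primary partitions with a DOS file system.
    for disk in hard_disks:
        for part in disk:
            if part['type'] == 'primary' and not part['active']:
                letters[take()] = part
    # Finally: optical drives, RAM disks and other devices, in driver load order.
    for device in other_devices:
        letters[take()] = device
    return letters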
Because DOS applications use these drive letters directly (unlike the /dev directory in Unix-like systems), they can be disrupted by adding new hardware that needs a drive letter. An example is the addition of a new hard drive having a primary partition where a pre-existing hard drive contains logical drives in extended partitions; the new drive will be assigned a letter that was previously assigned to one of the extended partition logical drives. Moreover, even adding a new hard drive having only logical drives in an extended partition would still disrupt the letters of RAM disks and optical drives. This problem persisted through the 9x versions of Windows until NT, which preserves the letters of existing drives until the user changes them.

Boot sequence
The boot sector on PC-compatible computers (the MBR) is located at track zero. The boot sector of each candidate boot device is in turn loaded into memory at 0000:7C00, and if the sector contains the values 0x55 0xAA at offset 0x1FE it is considered valid and is executed. On hard disks, each of the four partition table entries is then searched for an active partition (bit 7 set at offset 0x1BE + 0x10*n).
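The signature and active-partition checks described above can be sketched in a few lines of Python (a hedged illustration only; the image file name in the commented example is hypothetical):

# Minimal sketch: validate an MBR image and locate the active partition,
# following the checks described above (signature 0x55 0xAA at offset 0x1FE,
# "active" flag = bit 7 of the first byte of each 16-byte entry at 0x1BE).

def find_active_partition(mbr: bytes):
    if len(mbr) < 512 or mbr[0x1FE:0x200] != b'\x55\xAA':
        raise ValueError("not a valid boot sector")
    for n in range(4):                      # four partition table entries
        entry = mbr[0x1BE + 0x10 * n : 0x1BE + 0x10 * (n + 1)]
        if entry[0] & 0x80:                 # bit 7 set -> active (bootable)
            return n, entry
    return None

# Example (hypothetical file name):
# with open("disk.img", "rb") as f:
#     print(find_active_partition(f.read(512)))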
The boot sector code loads the DOS-BIOS into segment 0000:0600; the DOS-BIOS is located in the file IO.SYS on MS-DOS systems. In some cases the boot sector instead relocates itself to 0000:0600, loads the partition's boot sector into 0000:7C00, and executes it.
The DOS-BIOS then loads the DOS kernel, located in MSDOS.SYS on MS-DOS systems. In Windows 9x, the DOS-BIOS and kernel are combined in IO.SYS, and MSDOS.SYS is used as a text configuration file.
The kernel then loads the \CONFIG.SYS file to parse configuration parameters. The SHELL variable specifies the location of the shell which defaults to \COMMAND.COM.
The shell is loaded and executed.
The startup batch file AUTOEXEC.BAT is then run by the shell. DR-DOS allows the startup batch file to be specified through a parameter in the SHELL statement.
The BIOS and kernel files loaded by the boot sector must be contiguous and be the first two directory entries. As such, removing and re-adding these files is likely to render the media unbootable. It is, however, possible to replace the shell at will, a method that can be used to start the execution of dedicated applications faster.
Variants of MS-DOS rename the BIOS and kernel files; for example, DR-DOS and PC-DOS use IBMBIO.COM in place of IO.SYS and IBMDOS.COM in place of MSDOS.SYS. On systems designed for PC-DOS v1.10 the signature 0x55 0xAA at offset 0x1FE is not checked.

Saturday, November 15, 2008

Other Animation Techniques

Drawn on film animation: a technique where footage is produced by creating the images directly on film stock, for example by Norman McLaren, Len Lye and Stan Brakhage.

Paint-on-glass animation: a technique for making animated films by manipulating slow drying oil paints on sheets of glass, for example by Aleksandr Petrov.

Erasure animation: a technique using traditional 2D media, photographed over time as the artist manipulates the image. For example, William Kentridge is famous for his charcoal erasure films.

Pinscreen animation: makes use of a screen filled with movable pins, which can be moved in or out by pressing an object onto the screen. The screen is lit from the side so that the pins cast shadows. The technique has been used to create animated films with a range of textural effects difficult to achieve with traditional cel animation.

Sand animation: sand is moved around on a back- or front-lighted piece of glass to create each frame for an animated film. This creates an interesting effect when animated because of the light contrast.

Flip book: A flip book (sometimes, especially in British English, called a flick book) is a book with a series of pictures that vary gradually from one page to the next, so that when the pages are turned rapidly, the pictures appear to animate by simulating motion or some other change. Flip books are often illustrated books for children, but may also be geared towards adults and employ a series of photographs rather than drawings. Flip books are not always separate books, but may appear as an added feature in ordinary books or magazines, often in the page corners. Software packages and websites are also available that convert digital video files into custom-made flip books.

Other techniques and approaches :
Character animation
Chuckimation
Multi-sketching
Special effects animation
Animatronics
Stop motion

Stop motion

Stop-motion animation describes animation created by physically manipulating real-world objects and photographing them one frame of film at a time to create the illusion of movement. There are many different types of stop-motion animation, usually named after the medium used to create the animation. Computer software is widely available to create this type of animation.

Puppet animation typically involves stop-motion puppet figures interacting with each other in a constructed environment, in contrast to the real-world interaction in model animation. The puppets generally have an armature inside them to keep them still and steady and to constrain their movement to particular joints. Examples include The Tale of the Fox (France, 1937), The Nightmare Before Christmas (US, 1993), Corpse Bride (US, 2005), Coraline (US, 2009), the films of Jiří Trnka and the TV series Robot Chicken (US, 2005–present).
Puppetoons, created using techniques developed by George Pal, are puppet-animated films which typically use a different version of a puppet for different frames, rather than simply manipulating one existing puppet.

Clay animation, or Plasticine animation (often abbreviated as claymation), uses figures made of clay or a similar malleable material to create stop-motion animation. The figures may have an armature or wire frame inside them, similar to the related puppet animation (above), that can be manipulated in order to pose the figures. Alternatively, the figures may be made entirely of clay, as in the films of Bruce Bickford, where clay creatures morph into a variety of different shapes. Examples of clay-animated works include The Gumby Show (US, 1957–1967), the Morph shorts (UK, 1977–2000), the Wallace and Gromit shorts (UK, from 1989), Jan Švankmajer's Dimensions of Dialogue (Czechoslovakia, 1982), and The Trap Door (UK, 1984). Films include Wallace and Gromit: The Curse of the Were-Rabbit, Chicken Run and The Adventures of Mark Twain.

Cutout animation is a type of stop-motion animation produced by moving two-dimensional pieces of material such as paper or cloth. Examples include Terry Gilliam's animated sequences from Monty Python's Flying Circus (UK, 1969–1974); Fantastic Planet (France/Czechoslovakia, 1973); Tale of Tales (Russia, 1979); and the pilot episode (and occasional later sequences) of the TV series South Park (US, 1997).

Silhouette animation is a variant of cutout animation in which the characters are backlit and only visible as silhouettes. Examples include The Adventures of Prince Achmed (Weimar Republic, 1926) and Princes et princesses (France, 2000).

Model animation refers to stop-motion animation created to interact with and exist as a part of a live-action world. Intercutting, matte effects, and split screens are often employed to blend stop-motion characters or objects with live actors and settings. Examples include the work of Ray Harryhausen, as seen in films such as Jason and the Argonauts (1963), and the work of Willis O'Brien on films such as King Kong (1933).

Go motion is a variant of model animation that uses various techniques to create motion blur between frames of film, which is not present in traditional stop-motion. The technique was developed by Industrial Light & Magic and Phil Tippett to create special effects scenes for the film The Empire Strikes Back (1980). Another example is the dragon Vermithrax from Dragonslayer (1981).

Object animation refers to the use of regular inanimate objects in stop-motion animation, as opposed to specially created items.
Graphic animation uses non-drawn flat visual graphic material (photographs, newspaper clippings, magazines, etc.) which are sometimes manipulated frame-by-frame to create movement. At other times, the graphics remain stationary, while the stop-motion camera is moved to create on-screen action.

Pixilation involves the use of live humans as stop-motion characters. This allows for a number of surreal effects, including disappearances and reappearances, people appearing to slide across the ground, and other such tricks. Examples of pixilation include The Secret Adventures of Tom Thumb and the Angry Kid shorts.

Friday, November 14, 2008

Computer Animation

Computer animation encompasses a variety of techniques, the unifying factor being that the animation is created digitally on a computer.

2D animation
2D animation figures are created and/or edited on the computer using 2D bitmap graphics or 2D vector graphics. This includes automated computerized versions of traditional animation techniques such as tweening, morphing, onion skinning and interpolated rotoscoping.
Examples: Foster's Home for Imaginary Friends, Danny Phantom, Waltz with Bashir
Analog computer animation
Flash animation
PowerPoint animation

3D animation
In 3D animation, figures are digitally modeled and manipulated by an animator. In order to manipulate a mesh, it is given a digital skeletal structure that can be used to control the mesh; this process is called rigging. Various other techniques can be applied, such as mathematical functions (e.g. gravity, particle simulations), simulated fur or hair, effects such as fire and water, and motion capture, to name but a few; these techniques fall under the category of 3D dynamics. Many 3D animations are very believable and are commonly used as visual effects for recent movies.

Terms
Photorealistic animation is used primarily for animation that attempts to resemble real life, using advanced rendering that mimics detailed skin, plants, water, fire, clouds, etc. Examples include Up (2009, USA), Kung-Fu Panda and Ice Age (2002, USA).
Cel-shaded animation is used to mimic traditional animation using CG software. The shading looks stark, with less blending of colors. Examples include Skyland (2007, France), Appleseed (2007, Japan) and The Legend of Zelda: The Wind Waker (2002, Japan).
Motion capture is used when live-action actors wear special suits that allow computers to copy their movements into CG characters. Examples include The Polar Express (2004, USA), Beowulf (2007), Disney's A Christmas Carol (2009, USA) and Avatar (2009, USA).
2D animation techniques tend to focus on image manipulation while 3D techniques usually build virtual worlds in which characters and objects move and interact. 3D animation can create images that seem real to the viewer.

Wednesday, November 5, 2008

Traditional Animation

Traditional animation (also called cel animation or hand-drawn animation) was the process used for most animated films of the 20th century. The individual frames of a traditionally animated film are photographs of drawings, which are first drawn on paper. To create the illusion of movement, each drawing differs slightly from the one before it. The animators' drawings are traced or photocopied onto transparent acetate sheets called cels, which are filled in with paints in assigned colors or tones on the side opposite the line drawings. The completed character cels are photographed one-by-one onto motion picture film against a painted background by a rostrum camera.
The traditional cel animation process became obsolete by the beginning of the 21st century. Today, animators' drawings and the backgrounds are either scanned into or drawn directly into a computer system. Various software programs are used to color the drawings and simulate camera movement and effects. The final animated piece is output to one of several delivery media, including traditional 35 mm film and newer media such as digital video. The "look" of traditional cel animation is still preserved, and the character animators' work has remained essentially the same over the past 70 years. Some animation producers have used the term "tradigital" to describe cel animation which makes extensive use of computer technology.
Examples of traditionally animated feature films include Pinocchio (United States, 1940), Animal Farm (United Kingdom, 1954), and Akira (Japan, 1988). Traditionally animated films which were produced with the aid of computer technology include The Lion King (US, 1994), Sen to Chihiro no Kamikakushi (Spirited Away) (Japan, 2001), Treasure Planet (USA, 2002) and Les Triplettes de Belleville (2003).
Full animation refers to the process of producing high-quality traditionally animated films, which regularly use detailed drawings and plausible movement. Fully animated films can be made in a variety of styles, from the more realistically animated works produced by the Walt Disney studio (Beauty and the Beast, Aladdin, The Lion King) to the more "cartoony" styles of those produced by the Warner Bros. animation studio. Many of the Disney animated features are examples of full animation, as are non-Disney works such as The Secret of NIMH (US, 1982), The Iron Giant (US, 1999) and Nocturna (Spain, 2007).
Limited animation involves the use of less detailed and/or more stylized drawings and methods of movement. Pioneered by the artists at the American studio United Productions of America, limited animation can be used as a method of stylized artistic expression, as in Gerald McBoing Boing (US, 1951), Yellow Submarine (UK, 1968), and much of the anime produced in Japan. Its primary use, however, has been in producing cost-effective animated content for media such as television (the work of Hanna-Barbera, Filmation, and other TV animation studios) and later the Internet (web cartoons). Some examples are SpongeBob SquarePants (USA, 1999–present), The Fairly OddParents (USA, 2001–present) and Invader Zim (USA, 2001–2002, 2006).
Rotoscoping is a technique, patented by Max Fleischer in 1917, where animators trace live-action movement, frame by frame. The source film can be directly copied from actors' outlines into animated drawings, as in The Lord of the Rings (US, 1978), used as a basis and inspiration for character animation, as in most Disney films, or used in a stylized and expressive manner, as in Waking Life (US, 2001) and A Scanner Darkly (US, 2006). Some other examples are: Fire and Ice (USA, 1983) and Heavy Metal (1981).
Live-action/animation is a technique that combines hand-drawn characters with live-action shots. One of the earlier uses was Koko the Clown, when Koko was drawn over live-action footage. Other examples include Who Framed Roger Rabbit? (USA, 1988), Space Jam (USA, 1996) and Osmosis Jones (USA, 2001).

Monday, November 3, 2008

Animation

Animation is the rapid display of a sequence of images of 2-D or 3-D artwork or model positions in order to create an illusion of movement. The effect is an optical illusion of motion due to the phenomenon of persistence of vision, and can be created and demonstrated in several ways. The most common method of presenting animation is as a motion picture or video program, although there are other methods.

Early examples
Early examples of attempts to capture the phenomenon of motion drawing can be found in paleolithic cave paintings, where animals are depicted with multiple legs in superimposed positions, clearly attempting to convey the perception of motion.
A 5,000 year old earthen bowl found in Iran in Shahr-i Sokhta has five images of a goat painted along the sides. This has been claimed to be an example of early animation. However, since no equipment existed to show the images in motion, such a series of images cannot be called animation in a true sense of the word.
A Chinese zoetrope-type device was invented in 180 AD. The phenakistoscope, praxinoscope, and the common flip book were early popular animation devices invented during the 19th century.
These devices produced the appearance of movement from sequential drawings using technological means, but animation did not really develop much further until the advent of cinematography.
There is no single person who can be considered the "creator" of film animation, as there were several people working on projects which could be considered animation at about the same time.
Georges Méliès was a creator of special-effect films and was generally one of the first people to use animation with his technique. He discovered the technique by accident: stopping the camera to change something in the scene, and then continuing to roll the film. This idea was later known as stop-motion animation. Méliès discovered it when his camera broke down while shooting a bus driving by; by the time he had fixed the camera, a hearse happened to be passing just as he restarted the film, so the end result was that a bus appeared to transform into a hearse. He was just one of the great contributors to animation in its early years.
The earliest surviving stop-motion advertising film was an English short by Arthur Melbourne-Cooper called Matches: An Appeal (1899). Developed for the Bryant and May Matchsticks company, it involved stop-motion animation of wired-together matches writing a patriotic call to action on a blackboard.
J. Stuart Blackton was possibly the first American film-maker to use the techniques of stop-motion and hand-drawn animation. Introduced to film-making by Edison, he pioneered these concepts at the turn of the 20th century, with his first copyrighted work dated 1900. Several of his films, among them The Enchanted Drawing (1900) and Humorous Phases of Funny Faces (1906) were film versions of Blackton's "lightning artist" routine, and utilized modified versions of Méliès' early stop-motion techniques to make a series of blackboard drawings appear to move and reshape themselves. 'Humorous Phases of Funny Faces' is regularly cited as the first true animated film, and Blackton is considered the first true animator.

Another French artist, Émile Cohl, began drawing cartoon strips and created a film in 1908 called Fantasmagorie. The film largely consisted of a stick figure moving about and encountering all manner of morphing objects, such as a wine bottle that transforms into a flower. There were also sections of live action where the animator’s hands would enter the scene. The film was created by drawing each frame on paper and then shooting each frame onto negative film, which gave the picture a blackboard look. This makes Fantasmagorie the first animated film created using what came to be known as traditional (hand-drawn) animation.
Following the successes of Blackton and Cohl, many other artists began experimenting with animation. One such artist was Winsor McCay, a successful newspaper cartoonist, who created detailed animations that required a team of artists and painstaking attention to detail. Each frame was drawn on paper, which invariably required backgrounds and characters to be redrawn and animated. Among McCay's most noted films are Little Nemo (1911), Gertie the Dinosaur (1914) and The Sinking of the Lusitania (1918).
The production of animated short films, typically referred to as "cartoons", became an industry of its own during the 1910s, and cartoon shorts were produced to be shown in movie theaters. The most successful early animation producer was John Randolph Bray, who, along with animator Earl Hurd, patented the cel animation process which dominated the animation industry for the rest of the decade.

Monday, October 27, 2008

Bee Colony Optimization

The bee colony optimization algorithm is inspired by the behaviour of a honey bee colony during nectar collection. This biologically inspired approach is currently being employed to solve continuous optimization problems, to train neural networks, to optimize the design of mechanical and electronic components, and to tackle combinatorial optimization problems such as job shop scheduling, Internet server optimization and the travelling salesman problem.

Bee colony optimization in the job shop scheduling problem


Bee Colony Model for JSSP

The honey bees' effective foraging strategy can be applied to Job Shop Scheduling Problems.
A feasible solution to a Job Shop Scheduling Problem is a complete schedule of the operations specified in the problem. Each solution can be thought of as a path from the hive to a food source.
The makespan of the solution is analogous to the profitability of the food source in terms of distance and sweetness of the nectar. Hence, the shorter the makespan, the higher the profitability of the solution path.
We can thus maintain a colony of bees, where each bee traverses a potential solution path. Once a feasible solution is found, each bee returns to the hive to perform a waggle dance. The waggle dance is represented by a list of elite solutions (Chong et al., 2006), from which other bees can choose to follow another bee's path. Bees with a better makespan have a higher probability of adding their paths to the list of elite solutions, promoting convergence to an optimal solution.
Using the above scheme, the natural honey bee's self organizing foraging strategy can be applied to the Job Shop Scheduling Problem.
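A minimal Python sketch of this scheme is shown below. For brevity it scores job orders with a toy permutation flow-shop makespan rather than a full job-shop schedule, and the processing times, colony size and probabilities are invented; it loosely follows the elite-solutions idea of Chong et al. (2006) rather than reproducing their exact algorithm.

import random

# Toy makespan: PROC[j][m] = processing time of job j on machine m.
# A "solution path" is a job order; a shorter makespan = a more profitable food source.
PROC = [[3, 6, 2], [5, 2, 4], [2, 4, 3], [4, 3, 5]]   # invented data

def makespan(order, proc=PROC):
    machines = len(proc[0])
    finish = [0] * machines
    for j in order:
        for m in range(machines):
            start = max(finish[m], finish[m - 1] if m else 0)
            finish[m] = start + proc[j][m]
    return finish[-1]

def bee_colony(n_bees=20, n_elite=4, iterations=200, follow_prob=0.8):
    jobs = list(range(len(PROC)))
    elite = []                                   # the "waggle dance" list of elite solutions
    best = None
    for _ in range(iterations):
        for _ in range(n_bees):
            if elite and random.random() < follow_prob:
                # Follow a dance: pick an elite path, weighted by profitability
                # (1/makespan), and explore near it by swapping two jobs.
                weights = [1.0 / makespan(s) for s in elite]
                path = random.choices(elite, weights)[0][:]
                i, j = random.sample(range(len(path)), 2)
                path[i], path[j] = path[j], path[i]
            else:
                path = random.sample(jobs, len(jobs))  # scout bee: random new path
            if best is None or makespan(path) < makespan(best):
                best = path
            elite = sorted(elite + [path], key=makespan)[:n_elite]
    return best, makespan(best)

print(bee_colony())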

Saturday, October 18, 2008

Ant Colony Optimization

In computer science and operations research, the ant colony optimization algorithm (ACO) is a probabilistic technique for solving computational problems which can be reduced to finding good paths through graphs.
This algorithm is a member of the ant colony algorithms family, within swarm intelligence methods, and it constitutes a metaheuristic optimization. Initially proposed by Marco Dorigo in 1992 in his PhD thesis, the first algorithm aimed to search for an optimal path in a graph, based on the behavior of ants seeking a path between their colony and a source of food. The original idea has since diversified to solve a wider class of numerical problems, and as a result, several variants have emerged, drawing on various aspects of the behavior of ants.

Overview
In the real world, ants (initially) wander randomly, and upon finding food return to their colony while laying down pheromone trails. If other ants find such a path, they are likely not to keep travelling at random, but to instead follow the trail, returning and reinforcing it if they eventually find food (see Ant communication).
Over time, however, the pheromone trail starts to evaporate, thus reducing its attractive strength. The more time it takes for an ant to travel down the path and back again, the more time the pheromones have to evaporate. A short path, by comparison, gets marched over faster, and thus the pheromone density remains high as it is laid on the path as fast as it can evaporate. Pheromone evaporation also has the advantage of avoiding convergence to a locally optimal solution. If there were no evaporation at all, the paths chosen by the first ants would tend to be excessively attractive to the following ones. In that case, the exploration of the solution space would be constrained.
Thus, when one ant finds a good (i.e., short) path from the colony to a food source, other ants are more likely to follow that path, and positive feedback eventually leads to all the ants following a single path. The idea of the ant colony algorithm is to mimic this behavior with "simulated ants" walking around the graph representing the problem to solve.

Detailed
The original idea comes from observing the exploitation of food resources among ants, in which ants’ individually limited cognitive abilities have collectively been able to find the shortest path between a food source and the nest.


  1. The first ant finds the food source (F) via any path (a), then returns to the nest (N), leaving behind a pheromone trail (b).
  2. Ants follow the four possible paths indiscriminately, but the strengthening of the trail makes the shortest route more attractive.
  3. Ants take the shortest route; long portions of the other paths lose their pheromone trails.

In a series of experiments on a colony of ants with a choice between two unequal length paths leading to a source of food, biologists have observed that ants tended to use the shortest route. A model explaining this behaviour is as follows:

  1. An ant (called "blitz") runs more or less at random around the colony;
  2. If it discovers a food source, it returns more or less directly to the nest, leaving in its path a trail of pheromone;
  3. These pheromones are attractive; nearby ants will be inclined to follow, more or less directly, the track;
  4. Returning to the colony, these ants will strengthen the route;
  5. If there are two routes to reach the same food source then, in a given amount of time, the shorter one will be traveled by more ants than the long route;
  6. The short route will be increasingly enhanced, and therefore become more attractive;
  7. The long route will eventually disappear because pheromones are volatile;
  8. Eventually, all the ants have determined and therefore "chosen" the shortest route.

Ants use the environment as a medium of communication. They exchange information indirectly by depositing pheromones, all detailing the status of their "work". The information exchanged has a local scope: only an ant located where the pheromones were left has a notion of them. This system is called "stigmergy" and occurs in many social animal societies (it has been studied in the case of the construction of pillars in the nests of termites). The mechanism to solve a problem too complex to be addressed by single ants is a good example of a self-organized system. This system is based on positive feedback (the deposit of pheromone attracts other ants that will strengthen it themselves) and negative feedback (dissipation of the route by evaporation prevents the system from thrashing). Theoretically, if the quantity of pheromone remained the same over time on all edges, no route would be chosen. However, because of feedback, a slight variation on an edge will be amplified and thus allow the choice of an edge. The algorithm will move from an unstable state in which no edge is stronger than another, to a stable state where the route is composed of the strongest edges.
The basic philosophy of the algorithm involves the movement of a colony of ants through the different states of the problem influenced by two local decision policies, viz., trails and attractiveness. Thereby, each such ant incrementally constructs a solution to the problem. When an ant completes a solution, or during the construction phase, the ant evaluates the solution and modifies the trail value on the components used in its solution. This pheromone information will direct the search of the future ants. Furthermore, the algorithm also includes two more mechanisms, viz., trail evaporation and daemon actions. Trail evaporation reduces all trail values over time thereby avoiding any possibilities of getting stuck in local optima. The daemon actions are used to bias the search process from a non-local perspective.
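A minimal Ant System-style sketch in Python is given below: simulated ants build tours on a small travelling-salesman instance, pheromone evaporates each iteration (negative feedback), and each ant deposits pheromone in proportion to the quality of its tour (positive feedback). The city coordinates and parameter values are invented, and daemon actions are omitted.

import math, random

# Minimal Ant System sketch for a small TSP instance (coordinates invented).
CITIES = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3)]
N = len(CITIES)
dist = [[math.dist(a, b) or 1e-9 for b in CITIES] for a in CITIES]

def tour_length(tour):
    return sum(dist[tour[i]][tour[(i + 1) % N]] for i in range(N))

def aco(n_ants=10, iterations=100, alpha=1.0, beta=3.0, rho=0.5, Q=1.0):
    tau = [[1.0] * N for _ in range(N)]          # pheromone trails
    best = None
    for _ in range(iterations):
        tours = []
        for _ in range(n_ants):
            # Construct a tour city by city, guided by trails and attractiveness.
            tour = [random.randrange(N)]
            while len(tour) < N:
                i = tour[-1]
                candidates = [j for j in range(N) if j not in tour]
                weights = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                           for j in candidates]
                tour.append(random.choices(candidates, weights)[0])
            tours.append(tour)
            if best is None or tour_length(tour) < tour_length(best):
                best = tour
        # Evaporation (negative feedback) ...
        for i in range(N):
            for j in range(N):
                tau[i][j] *= (1.0 - rho)
        # ... and pheromone deposit proportional to tour quality (positive feedback).
        for tour in tours:
            deposit = Q / tour_length(tour)
            for i in range(N):
                a, b = tour[i], tour[(i + 1) % N]
                tau[a][b] += deposit
                tau[b][a] += deposit
    return best, tour_length(best)

print(aco())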

Thursday, October 16, 2008

Simulated Annealing

Simulated annealing (SA) is a generic probabilistic metaheuristic for the global optimization problem of applied mathematics, namely locating a good approximation to the global optimum of a given function in a large search space. It is often used when the search space is discrete (e.g., all tours that visit a given set of cities). For certain problems, simulated annealing may be more effective than exhaustive enumeration — provided that the goal is merely to find an acceptably good solution in a fixed amount of time, rather than the best possible solution.
The name and inspiration come from annealing in metallurgy, a technique involving heating and controlled cooling of a material to increase the size of its crystals and reduce their defects. The heat causes the atoms to become unstuck from their initial positions (a local minimum of the internal energy) and wander randomly through states of higher energy; the slow cooling gives them more chances of finding configurations with lower internal energy than the initial one.
By analogy with this physical process, each step of the SA algorithm replaces the current solution by a random "nearby" solution, chosen with a probability that depends both on the difference between the corresponding function values and also on a global parameter T (called the temperature), that is gradually decreased during the process. The dependency is such that the current solution changes almost randomly when T is large, but increasingly "downhill" as T goes to zero. The allowance for "uphill" moves saves the method from becoming stuck at local optima—which are the bane of greedier methods.
The method was independently described by Scott Kirkpatrick, C. Daniel Gelatt and Mario P. Vecchi in 1983, and by Vlado Černý in 1985. The method is an adaptation of the Metropolis-Hastings algorithm, a Monte Carlo method to generate sample states of a thermodynamic system, invented by N. Metropolis et al. in 1953.

Overview
In the simulated annealing (SA) method, each point s of the search space is analogous to a state of some physical system, and the function E(s) to be minimized is analogous to the internal energy of the system in that state. The goal is to bring the system, from an arbitrary initial state, to a state with the minimum possible energy.

The basic iteration
At each step, the SA heuristic considers some neighbouring state s' of the current state s, and probabilistically decides between moving the system to state s' or staying in state s. These probabilities ultimately lead the system to move to states of lower energy. Typically this step is repeated until the system reaches a state that is good enough for the application, or until a given computation budget has been exhausted.

The neighbours of a state
The neighbours of a state are new states of the problem that are produced after altering the given state in some particular way. For example, in the travelling salesman problem, each state is typically defined as a particular permutation of the cities to be visited. The neighbours of some particular permutation are the permutations that are produced, for example, by interchanging a pair of adjacent cities. The action taken to alter the solution in order to find neighbouring solutions is called a "move", and different "moves" give different neighbours. These moves usually result in minimal alterations of the solution, as the previous example depicts, in order to help the algorithm optimize the solution to the maximum extent while retaining the already optimum parts of the solution and affecting only the suboptimum parts. In the previous example, the parts of the solution are the parts of the tour.
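For the travelling-salesman example above, such a move can be written in a couple of lines (a sketch, with the tour represented simply as a list of city indices):

import random

def neighbour(tour):
    # Produce a nearby tour by interchanging a pair of adjacent cities.
    new = list(tour)
    i = random.randrange(len(new) - 1)
    new[i], new[i + 1] = new[i + 1], new[i]
    return new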
Searching for neighbours to a state is fundamental to optimization because the final solution will come after a tour of successive neighbours. Simple heuristics move by finding best neighbour after best neighbour and stop when they have reached a solution which has no neighbours that are better solutions. The problem with this approach is that a solution that does not have any immediate neighbours that are better solutions is not necessarily the optimum. It would be the optimum only if it were shown that no kind of alteration of the solution gives a better solution, not just a particular kind of alteration. For this reason it is said that simple heuristics can only reach local optima and not the global optimum. Metaheuristics, although they also optimize through the neighbourhood approach, differ from heuristics in that they can move through neighbours that are worse solutions than the current solution. Simulated annealing in particular doesn't even try to find the best neighbour. The reason for this is that the search can no longer stop in a local optimum, and in theory, if the metaheuristic can run for an infinite amount of time, the global optimum will be found.

Acceptance probabilities
The probability of making the transition from the current state s to a candidate new state s' is specified by an acceptance probability function P(e,e',T), that depends on the energies e = E(s) and e' = E(s') of the two states, and on a global time-varying parameter T called the temperature.
One essential requirement for the probability function P is that it must be nonzero when e' > e, meaning that the system may move to the new state even when it is worse (has a higher energy) than the current one. It is this feature that prevents the method from becoming stuck in a local minimum—a state that is worse than the global minimum, yet better than any of its neighbours.
On the other hand, when T goes to zero, the probability P(e,e',T) must tend to zero if e' > e, and to a positive value if e' < e. That way, for sufficiently small values of T, the system will increasingly favor moves that go "downhill" (to lower energy values), and avoid those that go "uphill". In particular, when T becomes 0, the procedure will reduce to the greedy algorithm—which makes the move only if it goes downhill.
In the original description of SA, the probability P(e,e',T) was defined as 1 when e' < e — i.e., the procedure always moved downhill when it found a way to do so, irrespective of the temperature. Many descriptions and implementations of SA still take this condition as part of the method's definition. However, this condition is not essential for the method to work, and one may argue that it is both counterproductive and contrary to its principle.
The P function is usually chosen so that the probability of accepting a move decreases when the difference e' − e increases—that is, small uphill moves are more likely than large ones. However, this requirement is not strictly necessary, provided that the above requirements are met.
Given these properties, the temperature T plays a crucial role in controlling the evolution of the state s of the system vis-a-vis its sensitivity to the variations of system energies. To be precise, for a large T, the evolution of s is sensitive to coarser energy variations, while it is sensitive to finer energy variations when T is small.
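A commonly used choice satisfying these requirements is the Metropolis-style rule, sketched here in Python (shown as an illustration, not the only valid definition of P):

import math, random

def acceptance_probability(e, e_new, T):
    # Downhill moves are always accepted; uphill moves are accepted with
    # probability exp(-(e_new - e) / T), which vanishes as T goes to zero.
    if e_new < e:
        return 1.0
    return math.exp(-(e_new - e) / T) if T > 0 else 0.0

def accept_move(e, e_new, T):
    return random.random() < acceptance_probability(e, e_new, T)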

The annealing schedule
The name and inspiration of the algorithm demand an interesting feature related to the temperature variation to be embedded in the operational characteristics of the algorithm. This necessitates a gradual reduction of the temperature as the simulation proceeds. The algorithm starts initially with T set to a high value (or infinity), and then it is decreased at each step following some annealing schedule—which may be specified by the user, but must end with T = 0 towards the end of the allotted time budget. In this way, the system is expected to wander initially towards a broad region of the search space containing good solutions, ignoring small features of the energy function; then drift towards low-energy regions that become narrower and narrower; and finally move downhill according to the steepest descent heuristic.
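Putting the pieces together, the sketch below shows a generic SA loop with a geometric cooling schedule. The energy and neighbour functions are passed in by the caller, and all numeric defaults are illustrative choices rather than values prescribed by the method.

import math, random

def simulated_annealing(initial, energy, neighbour,
                        T0=10.0, cooling=0.95, steps_per_T=100, T_min=1e-3):
    s, e = initial, energy(initial)
    best, best_e = s, e
    T = T0
    while T > T_min:
        for _ in range(steps_per_T):
            s_new = neighbour(s)
            e_new = energy(s_new)
            # Metropolis-style acceptance: downhill always, uphill with prob exp(-dE/T).
            if e_new < e or random.random() < math.exp(-(e_new - e) / T):
                s, e = s_new, e_new
                if e < best_e:
                    best, best_e = s, e
        T *= cooling          # geometric annealing schedule
    return best, best_e

Used with the adjacent-swap neighbour sketched earlier and a tour-length energy, this already gives a workable, if basic, travelling-salesman solver.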


Example illustrating the effect of cooling schedule on the performance of simulated annealing. The problem is to rearrange the pixels of an image so as to minimize a certain potential energy function, which causes similar colours to attract at short range and repel at a slightly larger distance. The elementary moves swap two adjacent pixels. These images were obtained with a fast cooling schedule (left) and a slow cooling schedule (right), producing results similar to amorphous and crystalline solids, respectively.
It can be shown that for any given finite problem, the probability that the simulated annealing algorithm terminates with the global optimal solution approaches 1 as the annealing schedule is extended. This theoretical result, however, is not particularly helpful, since the time required to ensure a significant probability of success will usually exceed the time required for a complete search of the solution space.

Sunday, October 12, 2008

Genetic Algorithm

The genetic algorithm (GA) is a search heuristic that mimics the process of natural evolution. This heuristic is routinely used to generate useful solutions to optimization and search problems. Genetic algorithms belong to the larger class of evolutionary algorithms (EA), which generate solutions to optimization problems using techniques inspired by natural evolution, such as inheritance, mutation, selection, and crossover.

Methodology
In a genetic algorithm, a population of strings (called chromosomes or the genotype of the genome), which encode candidate solutions (called individuals, creatures, or phenotypes) to an optimization problem, evolves toward better solutions. Traditionally, solutions are represented in binary as strings of 0s and 1s, but other encodings are also possible. The evolution usually starts from a population of randomly generated individuals and happens in generations. In each generation, the fitness of every individual in the population is evaluated, multiple individuals are stochastically selected from the current population (based on their fitness), and modified (recombined and possibly randomly mutated) to form a new population. The new population is then used in the next iteration of the algorithm. Commonly, the algorithm terminates when either a maximum number of generations has been produced, or a satisfactory fitness level has been reached for the population. If the algorithm has terminated due to a maximum number of generations, a satisfactory solution may or may not have been reached.
Genetic algorithms find application in bioinformatics, phylogenetics, computational science, engineering, economics, chemistry, manufacturing, mathematics, physics and other fields.
A typical genetic algorithm requires:
a genetic representation of the solution domain,
a fitness function to evaluate the solution domain.
A standard representation of the solution is as an array of bits. Arrays of other types and structures can be used in essentially the same way. The main property that makes these genetic representations convenient is that their parts are easily aligned due to their fixed size, which facilitates simple crossover operations. Variable length representations may also be used, but crossover implementation is more complex in this case. Tree-like representations are explored in genetic programming and graph-form representations are explored in evolutionary programming.
The fitness function is defined over the genetic representation and measures the quality of the represented solution. The fitness function is always problem dependent. For instance, in the knapsack problem one wants to maximize the total value of objects that can be put in a knapsack of some fixed capacity. A representation of a solution might be an array of bits, where each bit represents a different object, and the value of the bit (0 or 1) represents whether or not the object is in the knapsack. Not every such representation is valid, as the size of objects may exceed the capacity of the knapsack. The fitness of the solution is the sum of values of all objects in the knapsack if the representation is valid, or 0 otherwise. In some problems, it is hard or even impossible to define the fitness expression; in these cases, interactive genetic algorithms are used.
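For the knapsack example just described, the fitness function can be written directly from that definition (a sketch with invented object values, weights and capacity):

# Minimal sketch of the knapsack fitness function described above.
# An individual is a bit list: bit i says whether object i is packed.
VALUES   = [60, 100, 120, 30]     # invented example data
WEIGHTS  = [10, 20, 30, 15]
CAPACITY = 50

def fitness(bits):
    weight = sum(w for w, b in zip(WEIGHTS, bits) if b)
    if weight > CAPACITY:          # invalid representation: objects do not fit
        return 0
    return sum(v for v, b in zip(VALUES, bits) if b)

print(fitness([1, 1, 0, 1]))       # 190 (weight 45, within capacity)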
Once we have the genetic representation and the fitness function defined, GA proceeds to initialize a population of solutions randomly, then improve it through repetitive application of mutation, crossover, inversion and selection operators.
Initialization
Initially many individual solutions are randomly generated to form an initial population. The population size depends on the nature of the problem, but typically contains several hundred to several thousand possible solutions. Traditionally, the population is generated randomly, covering the entire range of possible solutions (the search space). Occasionally, the solutions may be "seeded" in areas where optimal solutions are likely to be found.

Selection
During each successive generation, a proportion of the existing population is selected to breed a new generation. Individual solutions are selected through a fitness-based process, where fitter solutions (as measured by a fitness function) are typically more likely to be selected. Certain selection methods rate the fitness of each solution and preferentially select the best solutions. Other methods rate only a random sample of the population, as this process may be very time-consuming.
Most functions are stochastic and designed so that a small proportion of less fit solutions are selected. This helps keep the diversity of the population large, preventing premature convergence on poor solutions. Popular and well-studied selection methods include roulette wheel selection and tournament selection.

Reproduction
The next step is to generate a second generation population of solutions from those selected through genetic operators: crossover (also called recombination), and/or mutation.
For each new solution to be produced, a pair of "parent" solutions is selected for breeding from the pool selected previously. By producing a "child" solution using the above methods of crossover and mutation, a new solution is created which typically shares many of the characteristics of its "parents". New parents are selected for each new child, and the process continues until a new population of solutions of appropriate size is generated. Although reproduction methods based on two parents are more "biology inspired", some research suggests that using more than two "parents" can yield higher quality chromosomes.
These processes ultimately result in the next generation population of chromosomes that is different from the initial generation. Generally the average fitness will have increased by this procedure for the population, since only the best organisms from the first generation are selected for breeding, along with a small proportion of less fit solutions, for reasons already mentioned above.
Although crossover and mutation are known as the main genetic operators, it is possible to use other operators such as regrouping, colonization-extinction, or migration in genetic algorithms.

Termination
This generational process is repeated until a termination condition has been reached. Common terminating conditions are:

  • A solution is found that satisfies minimum criteria
  • Fixed number of generations reached
  • Allocated budget (computation time/money) reached
  • The highest ranking solution's fitness is reaching or has reached a plateau such that successive iterations no longer produce better results
  • Manual inspection
  • Combinations of the above

Simple generational genetic algorithm pseudocode

  1. Choose the initial population of individuals
  2. Evaluate the fitness of each individual in that population
  3. Repeat on this generation until termination: (time limit, sufficient fitness achieved, etc.)
  • Select the best-fit individuals for reproduction
  • Breed new individuals through crossover and mutation operations to give birth to offspring
  • Evaluate the individual fitness of new individuals
  • Replace least-fit population with new individuals
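The pseudocode above maps almost line-for-line onto a short Python program. The sketch below uses the bit-string knapsack fitness discussed earlier, tournament selection, single-point crossover and bit-flip mutation; all data and parameter values are invented for illustration.

import random

# A minimal generational GA matching the pseudocode above, applied to a
# bit-string knapsack fitness (data and parameters invented).
VALUES, WEIGHTS, CAPACITY = [60, 100, 120, 30, 70], [10, 20, 30, 15, 25], 60
N_BITS, POP_SIZE, GENERATIONS, MUTATION_RATE = 5, 30, 50, 0.05

def fitness(bits):
    weight = sum(w for w, b in zip(WEIGHTS, bits) if b)
    return 0 if weight > CAPACITY else sum(v for v, b in zip(VALUES, bits) if b)

def tournament(pop, k=3):
    # Select the fittest of k randomly sampled individuals.
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    point = random.randrange(1, N_BITS)          # single-point crossover
    return a[:point] + b[point:]

def mutate(bits):
    return [1 - b if random.random() < MUTATION_RATE else b for b in bits]

# 1. Choose the initial population of individuals.
population = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP_SIZE)]
# 3. Repeat until termination (here: a fixed number of generations).
for _ in range(GENERATIONS):
    # Select best-fit individuals, breed via crossover and mutation, replace population.
    population = [mutate(crossover(tournament(population), tournament(population)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(best, fitness(best))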

Tuesday, October 7, 2008

Neural Network

The term neural network was traditionally used to refer to a network or circuit of biological neurons. The modern usage of the term often refers to artificial neural networks, which are composed of artificial neurons or nodes. Thus the term has two distinct usages:
Biological neural networks are made up of real biological neurons that are connected or functionally related in the peripheral nervous system or the central nervous system. In the field of neuroscience, they are often identified as groups of neurons that perform a specific physiological function in laboratory analysis.
Artificial neural networks are made up of interconnecting artificial neurons (programming constructs that mimic the properties of biological neurons). Artificial neural networks may either be used to gain an understanding of biological neural networks, or for solving artificial intelligence problems without necessarily creating a model of a real biological system. The real, biological nervous system is highly complex and includes some features that may seem superfluous based on an understanding of artificial networks.
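As a minimal illustration of what such an artificial neuron computes, consider the following Python sketch (the weights, bias and inputs are arbitrary numbers chosen for the example):

import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs, passed through a non-linear (sigmoid) activation.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

print(neuron([0.5, 0.2, 0.9], [0.4, -0.6, 0.1], bias=0.05))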
This article focuses on the relationship between the two concepts; for detailed coverage of the two different concepts refer to the separate articles: Biological neural network and Artificial neural network.

Overview
In general a biological neural network is composed of a group or groups of chemically connected or functionally associated neurons. A single neuron may be connected to many other neurons and the total number of neurons and connections in a network may be extensive. Connections, called synapses, are usually formed from axons to dendrites, though dendrodendritic microcircuits and other connections are possible. Apart from the electrical signaling, there are other forms of signaling that arise from neurotransmitter diffusion, which have an effect on electrical signaling. As such, neural networks are extremely complex.
Artificial intelligence and cognitive modeling try to simulate some properties of neural networks. While similar in their techniques, the former has the aim of solving particular tasks, while the latter aims to build mathematical models of biological neural systems.
In the artificial intelligence field, artificial neural networks have been applied successfully to speech recognition, image analysis and adaptive control, in order to construct software agents (in computer and video games) or autonomous robots. Most of the currently employed artificial neural networks for artificial intelligence are based on statistical estimation, optimization and control theory.
The cognitive modelling field involves the physical or mathematical modeling of the behaviour of neural systems; ranging from the individual neural level (e.g. modelling the spike response curves of neurons to a stimulus), through the neural cluster level (e.g. modelling the release and effects of dopamine in the basal ganglia) to the complete organism (e.g. behavioural modelling of the organism's response to stimuli). Artificial intelligence, cognitive modelling, and neural networks are information processing paradigms inspired by the way biological neural systems process data.

Saturday, September 27, 2008

Steganographic Techniques

Physical Steganography
Steganart example: within such a picture, the letter positions of a hidden message are represented by increasing numbers (1 to 20), and a letter's value is given by its intersection position in the grid. For instance, the first letter of the hidden message is at the intersection of 1 and 4; after a few tries, the first letter of the message turns out to be the 14th letter of the alphabet, and the last one (number 20) is the 5th letter of the alphabet.
Steganography has been widely used, including in recent historical times and the present day. Possible permutations are endless and known examples include:
Hidden messages within wax tablets: in ancient Greece, people wrote messages on the wood, then covered it with wax upon which an innocent covering message was written.
Hidden messages on messenger's body: also used in ancient Greece. Herodotus tells the story of a message tattooed on a slave's shaved head, hidden by the growth of his hair, and exposed by shaving his head again. The message allegedly carried a warning to Greece about Persian invasion plans. This method has obvious drawbacks, such as delayed transmission while waiting for the slave's hair to grow, and the restrictions on the number and size of messages that can be encoded on one person's scalp.
In WWII, the French Resistance sent some messages written on the backs of couriers using invisible ink.
Hidden messages on paper written in secret inks, under other messages or on the blank parts of other messages.
Messages written in Morse code on knitting yarn and then knitted into a piece of clothing worn by a courier.
Messages written on the back of postage stamps.
During and after World War II, espionage agents used photographically produced microdots to send information back and forth. Microdots were typically minute, about the size of the period produced by a typewriter or smaller. WWII microdots needed to be embedded in the paper and covered with an adhesive (such as collodion), which was reflective and thus detectable by viewing against glancing light. Alternative techniques included inserting microdots into slits cut into the edge of postcards.
During World War II, a spy for Japan in New York City, Velvalee Dickinson, sent information to accommodation addresses in neutral South America. She was a dealer in dolls, and her letters discussed how many of this or that doll to ship. The stegotext was the doll orders, while the concealed "plaintext" was itself encoded and gave information about ship movements, etc. Her case became somewhat famous and she became known as the Doll Woman.
Cold War counter-propaganda. In 1968, crew members of the USS Pueblo (AGER-2) intelligence ship held as prisoners by North Korea, communicated in sign language during staged photo opportunities, informing the United States they were not defectors but rather were being held captive by the North Koreans. In other photos presented to the US, crew members gave "the finger" to the unsuspecting North Koreans, in an attempt to discredit photos that showed them smiling and comfortable.

Digital Steganography
Modern steganography entered the world in 1985 with the advent of personal computers being applied to classical steganography problems. Development following that was slow, but has since taken off, going by the number of "stego" programs available: over 800 digital steganography applications have been identified by the Steganography Analysis and Research Center. Digital steganography techniques include:
Concealing messages within the lowest bits of noisy images or sound files (a minimal sketch of this technique follows the list below).
Concealing data within encrypted data or within random data. The data to be concealed is first encrypted before being used to overwrite part of a much larger block of encrypted data or a block of random data (an unbreakable cipher like the one-time pad generates ciphertexts that look perfectly random if you don't have the private key).
Chaffing and winnowing.

Mimic functions convert one file to have the statistical profile of another. This can thwart statistical methods that help brute-force attacks identify the right solution in a ciphertext-only attack.
Concealed messages in tampered executable files, exploiting redundancy in the targeted instruction set.
Pictures embedded in video material (optionally played at slower or faster speed).
Injecting imperceptible delays to packets sent over the network from the keyboard. Delays in keypresses in some applications (telnet or remote desktop software) can mean a delay in packets, and the delays in the packets can be used to encode data.
Changing the order of elements in a set.
Content-Aware Steganography hides information in the semantics a human user assigns to a datagram. These systems offer security against a non-human adversary/warden.
Blog-Steganography. Messages are fractionalized and the (encrypted) pieces are added as comments of orphaned web-logs (or pin boards on social network platforms). In this case the selection of blogs is the symmetric key that sender and recipient are using; the carrier of the hidden message is the whole blogosphere.
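As an illustration of the first technique in the list above (hiding data in the least significant bits of noisy samples), the following Python sketch embeds and recovers a short message in a list of byte values. A real implementation would read and write actual image or audio samples through a media library; the raw list here is a stand-in.

# Minimal LSB-embedding sketch: hide message bits in the least significant bit
# of each "pixel" value, then read them back.

def hide(pixels, message: bytes):
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("cover is too small for the message")
    return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[len(bits):]

def reveal(pixels, n_bytes):
    bits = [p & 1 for p in pixels[:n_bytes * 8]]
    return bytes(sum(bit << (7 - i) for i, bit in enumerate(bits[k:k + 8]))
                 for k in range(0, len(bits), 8))

cover = list(range(256)) * 4                 # stand-in "noisy" pixel data
stego = hide(cover, b"hi")
print(reveal(stego, 2))                      # b'hi'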

Network Steganography
All information hiding techniques that may be used to exchange steganograms in telecommunication networks can be classified under the general term of network steganography. This nomenclature was originally introduced by Krzysztof Szczypiorski in 2003. Contrary to the typical steganographic methods which utilize digital media (images, audio and video files) as a cover for hidden data, network steganography utilizes communication protocols' control elements and their basic intrinsic functionality. As a result, such methods are harder to detect and eliminate.
Typical network steganography methods involve modification of the properties of a single network protocol. Such modification can be applied to the PDU (Protocol Data Unit), to the time relations between the exchanged PDUs, or both (hybrid methods); a minimal sketch of a timing-based channel appears at the end of this section.
Moreover, it is feasible to utilize the relation between two or more different network protocols to enable secret communication. These applications fall under the term inter-protocol steganography.
Network steganography covers a broad spectrum of techniques, which include, among others:
Steganophony – the concealment of messages in Voice-over-IP conversations, e.g. the employment of delayed or corrupted packets that would normally be ignored by the receiver (this method is called LACK – Lost Audio Packets Steganography), or, alternatively, hiding information in unused header fields.
WLAN Steganography – the utilization of methods that may be exercised to transmit steganograms in Wireless Local Area Networks. A practical example of WLAN Steganography is the HICCUPS system (Hidden Communication System for Corrupted Networks).
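As a rough illustration of the timing-based channels mentioned above, the following Python sketch encodes bits as the delays between otherwise meaningless UDP datagrams: a short gap carries a 0, a longer gap a 1. The destination address, the delay values, and the one-packet-per-bit framing are assumptions made for this example; it is not a description of LACK, HICCUPS, or any other published system.

import socket
import time

SHORT, LONG = 0.05, 0.25    # assumed delays (seconds) for bit 0 and bit 1
DEST = ("127.0.0.1", 9999)  # hypothetical receiver address

def send_covert_bits(bits):
    """Send dummy datagrams; the secret lives in the gaps between them."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(b"x", DEST)             # reference packet to start the clock
    for bit in bits:
        time.sleep(LONG if bit else SHORT)
        sock.sendto(b"x", DEST)         # packet content is irrelevant cover traffic
    sock.close()

def decode_gaps(arrival_times, threshold=0.15):
    """Receiver side: turn observed inter-arrival times back into bits."""
    return [1 if (t2 - t1) > threshold else 0
            for t1, t2 in zip(arrival_times, arrival_times[1:])]

if __name__ == "__main__":
    # Example: 'A' = 0b01000001, sent most-significant bit first.
    send_covert_bits([0, 1, 0, 0, 0, 0, 0, 1])

A real warden would see only ordinary-looking traffic; detecting the channel requires statistics over inter-packet timing, which is why such methods are described as harder to detect and eliminate.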

Printed steganography
Digital steganography output may be in the form of printed documents. A message, the plaintext, may be first encrypted by traditional means, producing a ciphertext. Then, an innocuous covertext is modified in some way so as to contain the ciphertext, resulting in the stegotext. For example, the letter size, spacing, typeface, or other characteristics of a covertext can be manipulated to carry the hidden message. Only a recipient who knows the technique used can recover the message and then decrypt it. Francis Bacon developed Bacon's cipher as such a technique.
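As a rough sketch of the Bacon-style approach described above (a simplification, not a reconstruction of Bacon's historical alphabet, which merged I/J and U/V), the following Python code encodes each payload letter as five binary choices and expresses those choices as lower-case versus UPPER-CASE letters in an innocuous covertext; in print, the same choices could instead select between two nearly identical typefaces. The function names are illustrative assumptions.

import string

def bacon_code(ch: str) -> str:
    """Five-bit 'A'/'B' code for a letter, using a simple 0-25 index."""
    n = string.ascii_uppercase.index(ch.upper())
    return format(n, "05b").replace("0", "A").replace("1", "B")

def embed(payload: str, covertext: str) -> str:
    """Carry the payload by casing the covertext's letters: 'A' -> lower, 'B' -> UPPER."""
    code = "".join(bacon_code(c) for c in payload if c.isalpha())
    letters = [c for c in covertext if c.isalpha()]
    if len(code) > len(letters):
        raise ValueError("covertext too short")
    out, i = [], 0
    for c in covertext:
        if c.isalpha() and i < len(code):
            out.append(c.lower() if code[i] == "A" else c.upper())
            i += 1
        else:
            out.append(c)
    return "".join(out)

print(embed("HI", "meet me at the usual place tomorrow"))

The recipient reverses the process by reading the casing of the first letters back into A/B groups of five; anyone unaware of the convention sees only sloppy typing.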
The ciphertext produced by most digital steganography methods, however, is not printable. Traditional digital methods rely on perturbing noise in the channel file to hide the message; as such, the channel file must be transmitted to the recipient with no additional noise from the transmission. Printing introduces much noise into the ciphertext, generally rendering the message unrecoverable. There are techniques that address this limitation; one notable example is ASCII Art Steganography.

Steganography using Sudoku Puzzle
This is the art of concealing data in an image using a Sudoku puzzle as the key. Steganography using Sudoku puzzles has as many keys as there are possible solutions of a Sudoku puzzle, approximately 6.67×10^21. This is equivalent to around 70 bits, making it much stronger than the DES method, which uses a 56-bit key.

Tuesday, September 23, 2008

Steganography

Steganography is the art and science of writing hidden messages in such a way that no one, apart from the sender and intended recipient, suspects the existence of the message, a form of security through obscurity. The word steganography is of Greek origin and means "concealed writing" from the Greek words steganos (στεγανός) meaning "covered or protected", and graphein (γράφειν) meaning "to write". The first recorded use of the term was in 1499 by Johannes Trithemius in his Steganographia, a treatise on cryptography and steganography disguised as a book on magic. Generally, messages will appear to be something else: images, articles, shopping lists, or some other covertext and, classically, the hidden message may be in invisible ink between the visible lines of a private letter.
The advantage of steganography, over cryptography alone, is that messages do not attract attention to themselves. Plainly visible encrypted messages—no matter how unbreakable—will arouse suspicion, and may in themselves be incriminating in countries where encryption is illegal. Therefore, whereas cryptography protects the contents of a message, steganography can be said to protect both messages and communicating parties.
Steganography includes the concealment of information within computer files. In digital steganography, electronic communications may include steganographic coding inside of a transport layer, such as a document file, image file, program or protocol. Media files are ideal for steganographic transmission because of their large size. As a simple example, a sender might start with an innocuous image file and adjust the color of every 100th pixel to correspond to a letter in the alphabet, a change so subtle that someone not specifically looking for it is unlikely to notice it.
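A toy version of the "every 100th pixel" scheme just described might look like the following Python sketch, which treats the image as a flat buffer of grayscale values and stores each letter's alphabet position in every 100th byte. This is purely an illustration of the prose example; real tools operate on actual image formats and make far less conspicuous changes, and the STEP spacing and function names are assumptions.

STEP = 100  # assumed spacing between carrier pixels

def hide(pixels: bytearray, text: str) -> bytearray:
    """Store each letter (1-26) as the value of every 100th grayscale pixel.
    Assumes the buffer is long enough for the message."""
    out = bytearray(pixels)
    for i, ch in enumerate(text.upper()):
        out[i * STEP] = ord(ch) - ord("A") + 1   # crude: replaces the pixel outright
    return out

def reveal(pixels: bytes, length: int) -> str:
    return "".join(chr(pixels[i * STEP] + ord("A") - 1) for i in range(length))

image = bytearray(128 for _ in range(100 * 100))  # flat mid-grey "image"
assert reveal(hide(image, "SOS"), 3) == "SOS"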

Ancient Steganography
The first recorded uses of steganography can be traced back to 440 BC when Herodotus mentions two examples of steganography in The Histories of Herodotus. Demaratus sent a warning about a forthcoming attack to Greece by writing it directly on the wooden backing of a wax tablet before applying its beeswax surface. Wax tablets were in common use then as reusable writing surfaces, sometimes used for shorthand. Another ancient example is that of Histiaeus, who shaved the head of his most trusted slave and tattooed a message on it. After his hair had grown the message was hidden. The purpose was to instigate a revolt against the Persians.

Additional Terminology
In general, terminology analogous to (and consistent with) more conventional radio and communications technology is used; however, a brief description of some terms which show up in software specifically, and are easily confused, is appropriate. These are most relevant to digital steganographic systems.
The payload is the data to be covertly communicated. The carrier is the signal, stream, or data file into which the payload is hidden; which differs from the "channel" (typically used to refer to the type of input, such as "a JPEG image"). The resulting signal, stream, or data file which has the payload encoded into it is sometimes referred to as the package, stego file, or covert message. The percentage of bytes, samples, or other signal elements which are modified to encode the payload is referred to as the encoding density and is typically expressed as a number between 0 and 1.
In a set of files, those files considered likely to contain a payload are called suspects. If the suspect was identified through some type of statistical analysis, it might be referred to as a candidate.

Countermeasures and Detection
Detection of physical steganography requires careful physical examination, including the use of magnification, developer chemicals and ultraviolet light. It is a time-consuming process with obvious resource implications, even in countries where large numbers of people are employed to spy on their fellow nationals. However, it is feasible to screen mail of certain suspected individuals or institutions, such as prisons or prisoner-of-war (POW) camps. During World War II, a technology used to ease monitoring of POW mail was specially treated paper that would reveal invisible ink. An article in the June 24, 1948 issue of Paper Trade Journal by the Technical Director of the United States Government Printing Office, Morris S. Kantrowitz, describes in general terms the development of this paper, three prototypes of which were named Sensicoat, Anilith, and Coatalith paper. These were for the manufacture of post cards and stationery to be given to German prisoners of war in the US and Canada. If POWs tried to write a hidden message the special paper would render it visible. At least two US patents were granted related to this technology, one to Mr. Kantrowitz, No. 2,515,232, "Water-Detecting paper and Water-Detecting Coating Composition Therefor", patented July 18, 1950, and an earlier one, "Moisture-Sensitive Paper and the Manufacture Thereof", No. 2,445,586, patented July 20, 1948. A similar strategy is to issue prisoners with writing paper ruled with a water-soluble ink that "runs" when in contact with a water-based invisible ink.
In computing, detection of steganographically encoded packages is called steganalysis. The simplest method of detecting modified files, however, is to compare them to known originals. For example, to detect information being moved through the graphics on a website, an analyst can maintain known-clean copies of these materials and compare them against the current contents of the site. The differences, assuming the carrier is the same, will compose the payload. In general, using an extremely high compression rate makes steganography difficult, but not impossible. While compression errors provide a hiding place for data, high compression reduces the amount of data available to hide the payload in, raising the encoding density and facilitating easier detection (in the extreme case, even by casual observation).
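The "compare against a known original" idea can be sketched in a few lines of Python: given a clean copy and a suspect copy of the same carrier, report which bytes differ and, under an assumed least-significant-bit encoding, read the payload bits back out. The LSB assumption and the helper names differing_positions and lsb_stream are purely illustrative; in practice the analyst rarely knows the embedding scheme.

def differing_positions(clean: bytes, suspect: bytes):
    """Byte offsets at which the suspect file departs from the known-good copy."""
    return [i for i, (a, b) in enumerate(zip(clean, suspect)) if a != b]

def lsb_stream(suspect: bytes, positions):
    """If the embedding was least-significant-bit, the payload bits are simply
    the low bits at the modified offsets."""
    return [suspect[i] & 1 for i in positions]

clean = bytes(range(16))
suspect = bytes(b ^ 1 if i < 4 else b for i, b in enumerate(clean))  # flip 4 low bits
mods = differing_positions(clean, suspect)
print(mods)                      # -> [0, 1, 2, 3]
print(lsb_stream(suspect, mods)) # candidate payload bits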

Saturday, September 13, 2008

History of Artificial Intelligence

The history of artificial intelligence began in antiquity, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen; as Pamela McCorduck writes, AI began with "an ancient wish to forge the gods."
The seeds of modern AI were planted by classical philosophers who attempted to describe the process of human thinking as the mechanical manipulation of symbols. This work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain.
The field of artificial intelligence research was founded at a conference on the campus of Dartmouth College in the summer of 1956. Those who attended would become the leaders of AI research for decades. Many of them predicted that a machine as intelligent as a human being would exist in no more than a generation and they were given millions of dollars to make this vision come true.
Eventually it became obvious that they had grossly underestimated the difficulty of the project. In 1973, in response to the criticism of Sir James Lighthill and ongoing pressure from Congress, the U.S. and British governments stopped funding undirected research into artificial intelligence. Seven years later, a visionary initiative by the Japanese government inspired governments and industry to provide AI with billions of dollars, but by the late 1980s the investors became disillusioned and withdrew funding again. This cycle of boom and bust, of "AI winters" and summers, continues to haunt the field. Undaunted, there are those who make extraordinary predictions even now.
Progress in AI has continued, despite the rise and fall of its reputation in the eyes of government bureaucrats and venture capitalists. Problems that had begun to seem impossible in 1970 have been solved and the solutions are now used in successful commercial products. However, no machine has been built with a human level of intelligence, contrary to the optimistic predictions of the first generation of AI researchers. "We can only see a short distance ahead," admitted Alan Turing, in a famous 1950 paper that catalyzed the modern search for machines that think. "But," he added, "we can see much that must be done."

Tuesday, September 9, 2008

Artificial Intelligence

Artificial intelligence (AI) is the intelligence of machines and the branch of computer science that aims to create it. AI textbooks define the field as "the study and design of intelligent agents" where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1956, defines it as "the science and engineering of making intelligent machines."

The field was founded on the claim that a central property of humans, intelligence—the sapience of Homo sapiens—can be so precisely described that it can be simulated by a machine. This raises philosophical issues about the nature of the mind and the ethics of creating artificial beings, issues which have been addressed by myth, fiction and philosophy since antiquity. Artificial intelligence has been the subject of optimism, but has also suffered setbacks and, today, has become an essential part of the technology industry, providing the heavy lifting for many of the most difficult problems in computer science.
AI research is highly technical and specialized, deeply divided into subfields that often fail to communicate with each other. Subfields have grown up around particular institutions, the work of individual researchers, the solution of specific problems, longstanding differences of opinion about how AI should be done and the application of widely differing tools. The central problems of AI include such traits as reasoning, knowledge, planning, learning, communication, perception and the ability to move and manipulate objects. General intelligence (or "strong AI") is still among the field's long term goals.

Sunday, August 10, 2008

Multimedia Usage

Multimedia finds its application in various areas including, but not limited to, advertisements, art, education, entertainment, engineering, medicine, mathematics, business, scientific research and spatial temporal applications. Several examples are as follows:

Creative industries
Creative industries use multimedia for a variety of purposes ranging from fine arts, to entertainment, to commercial art, to journalism, to media and software services provided for any of the industries listed below. An individual multimedia designer may cover the spectrum throughout their career. Requests for their skills range from technical, to analytical, to creative.

Commercial
Much of the electronic old and new media used by commercial artists is multimedia. Exciting presentations are used to grab and keep attention in advertising. Business-to-business and interoffice communications are often developed by creative services firms as advanced multimedia presentations, going beyond simple slide shows, to sell ideas or liven up training. Commercial multimedia developers may also be hired to design for governmental and nonprofit services applications.

Entertainment and fine arts
In addition, multimedia is heavily used in the entertainment industry, especially to develop special effects in movies and animations. Multimedia games are a popular pastime and are software programs available either as CD-ROMs or online. Some video games also use multimedia features. Multimedia applications that allow users to actively participate instead of just sitting by as passive recipients of information are called interactive multimedia. In the arts there are multimedia artists who blend techniques using different media, in some way incorporating interaction with the viewer. One of the most relevant is Peter Greenaway, who melds cinema with opera and all sorts of digital media. Another approach entails the creation of multimedia that can be displayed in a traditional fine arts arena, such as an art gallery. Although multimedia display material may be volatile, the survivability of the content is as strong as any traditional media. Digital recording material may be just as durable and infinitely reproducible with perfect copies every time.

Education
In Education, multimedia is used to produce computer-based training courses (popularly called CBTs) and reference works such as encyclopedias and almanacs. A CBT lets the user go through a series of presentations, text about a particular topic, and associated illustrations in various information formats. Edutainment is an informal term used to describe combining education with entertainment, especially multimedia entertainment.
Learning theory in the past decade has expanded dramatically because of the introduction of multimedia. Several lines of research have evolved (e.g. Cognitive load, Multimedia learning, and the list goes on). The possibilities for learning and instruction are nearly endless.
The idea of media convergence is also becoming a major factor in education, particularly higher education. Defined as separate technologies such as voice (and telephony features), data (and productivity applications) and video that now share resources and interact with each other, synergistically creating new efficiencies, media convergence is rapidly changing the curriculum in universities all over the world. Likewise, it is changing the availability, or lack thereof, of jobs requiring this savvy technological skill.
Various educational packages are now available that impart information to children; they are generally distributed on CDs and similar media.

Journalism
Newspaper companies all over the world are also trying to embrace the new phenomenon by implementing its practices in their work. While some have been slow to come around, major newspapers like The New York Times, USA Today and The Washington Post are setting the precedent for the positioning of the newspaper industry in a globalized world.
News reporting is not limited to traditional media outlets. Freelance journalists can make use of different new media to produce multimedia pieces for their news stories. This engages global audiences and tells stories with technology, which develops new communication techniques for both media producers and consumers. The Common Language Project is an example of this type of multimedia journalism production.

Engineering
Software engineers may use multimedia in computer simulations for anything from entertainment to training, such as military or industrial training. Multimedia for software interfaces is often created as a collaboration between creative professionals and software engineers.

Industry
In the Industrial sector, multimedia is used as a way to help present information to shareholders, superiors and coworkers. Multimedia is also helpful for providing employee training, advertising and selling products all over the world via virtually unlimited web-based technology.

Mathematical and scientific research
In mathematical and scientific research, multimedia is mainly used for modeling and simulation. For example, a scientist can look at a molecular model of a particular substance and manipulate it to arrive at a new substance. Representative research can be found in journals such as the Journal of Multimedia.

Medicine
In Medicine, doctors can be trained by observing a virtual surgery, or they can simulate how the human body is affected by diseases spread by viruses and bacteria and then develop techniques to prevent them.

Document imaging
Document imaging is a technique that takes a hard copy of an image or document and converts it into a digital format (for example, by scanning).

Disabilities
Ability Media allows those with disabilities to gain qualifications in the multimedia field so they can pursue careers that give them access to a wide array of powerful communication forms.

Miscellaneous
In Europe, the reference organisation for Multimedia industry is the European Multimedia Associations Convention (EMMAC).

Saturday, August 9, 2008

Multimedia

Multimedia is media and content that uses a combination of different content forms. The term can be used as a noun (a medium with multiple content forms) or as an adjective describing a medium as having multiple content forms. The term is used in contrast to media which only use traditional forms of printed or hand-produced material. Multimedia includes a combination of text, audio, still images, animation, video, and interactivity content forms.
Multimedia is usually recorded and played, displayed or accessed by information content processing devices, such as computerized and electronic devices, but can also be part of a live performance. Multimedia (as an adjective) also describes electronic media devices used to store and experience multimedia content. Multimedia is distinguished from mixed media in fine art; by including audio, for example, it has a broader scope. The term "rich media" is synonymous with interactive multimedia. Hypermedia can be considered one particular multimedia application.

Categorization of multimedia
Multimedia may be broadly divided into linear and non-linear categories. Linear content progresses without any navigational control for the viewer, as in a cinema presentation. Non-linear content offers user interactivity to control progress, as used in a computer game or in self-paced computer-based training. Hypermedia is an example of non-linear content.
Multimedia presentations can be live or recorded. A recorded presentation may allow interactivity via a navigation system. A live multimedia presentation may allow interactivity via an interaction with the presenter or performer.

Major characteristics of multimedia
Multimedia presentations may be viewed in person on stage, projected, transmitted, or played locally with a media player. A broadcast may be a live or recorded multimedia presentation. Broadcasts and recordings can be either analog or digital electronic media technology. Digital online multimedia may be downloaded or streamed. Streaming multimedia may be live or on-demand.
Multimedia games and simulations may be used in a physical environment with special effects, with multiple users in an online network, or locally with an offline computer, game system, or simulator.
The various formats of technological or digital multimedia may be intended to enhance the users' experience, for example to make it easier and faster to convey information, or, in entertainment or art, to transcend everyday experience.

Enhanced levels of interactivity are made possible by combining multiple forms of media content. Online multimedia is increasingly becoming object-oriented and data-driven, enabling applications with collaborative end-user innovation and personalization on multiple forms of content over time. Examples range from photo galleries on websites, with user-updated images and titles, to simulations whose coefficients, events, illustrations, animations or videos are modifiable, allowing the multimedia "experience" to be altered without reprogramming. In addition to seeing and hearing, haptic technology enables virtual objects to be felt. Emerging technology involving illusions of taste and smell may also enhance the multimedia experience.

Tuesday, August 5, 2008

Game Maker Language

Game Maker Language (GML) is a scripting language developed for use with a computer game creation application called Game Maker. It was originally created by Mark Overmars to supplement the drag-and-drop action system used in Game Maker. However, in the latest versions, all the drag-and-drop actions translate to GML rather than being separate from it.
GML is heavily integrated with the Game Maker environment. Usually, elements such as sprites and sounds are all organized within the Game Maker IDE (though they can also be loaded from external files). Game Maker's architecture is designed to handle such things as event detection, level design, and object configuration without the need to code them manually, minimizing code verbosity with intuitive interface features.
A common misconception is that languages such as Pascal and C++ can be directly used in GML. This is incorrect, and is a common mistake due to GML's ability to utilise Pascal and C++ style syntax (e.g. "&&" is interchangeable with "and").

Libraries
In Game Maker, a set of drag-and-drop actions is called a library. In the Game Maker interface, these libraries are displayed as tabs containing icons called actions. Each action is a GML script or function that users can use in their games. Game Maker comes with a default set of libraries that contain the common actions used by most games; it is also possible to create libraries using the Library builder provided separately from Game Maker. There are many libraries that a Game Maker user may download to avoid using GML to achieve certain tasks. For example, if a user wants to make a simple 3D game but has no experience with GML, they can download a 3D library.

GML syntax and semantics
GML is structurally similar to C-based languages in its use of code blocks, function calls, variable assignments, operator syntax, and so on.
GML distinguishes between statements and expressions. For example, g < 1; is not a valid statement, and Game Maker will return an error. Also, variable assignment is always a statement in GML and cannot be used within an expression.

Friday, August 1, 2008

Game Maker

Game Maker (often abbreviated to GM) is a Windows and Mac IDE originally developed by Mark Overmars in the Delphi programming language. It is currently developed and published by YoYo Games, a software company in which Overmars is involved. Game Maker allows users to easily develop computer games without the requirement of prior computer programming experience, while allowing advanced users to create complex applications with its built-in scripting language.
The latest stable release of Game Maker for Windows is version 8 as of December 2009, and version 7 on Mac as of 11 August 2010. Since its initial release in 1999, Game Maker has gained many new features, notably 3D graphics support, as well as a significant user base, with YoYo Games providing free hosting for user-created games.

Development history
Game Maker was originally titled Animo, a program specializing in 2D animation. Overmars released the first public version (version 1.1) on November 15, 1999. While this version of Game Maker had a built-in scripting language, which was not as complex as in more recent versions, it and the next few versions of Game Maker did not have DirectX support, a separate runner to run games independently from Game Maker, or the ability to compile games into executable files.
Each major release of Game Maker added substantial new features and improved stability, while gaining steadily in popularity. In 2001, version 3.0 implemented DirectX for the first time, while version 4.0 (released July 2001) was rewritten from scratch, changing the interface significantly. Version 5.0 was released in April 2003, adding support for external data files and time lines. In version 6.0, released October 2004, Game Maker's graphics engine was rewritten using Direct3D as a base, allowing for more complex operations such as easier alpha transparency and sprite rotation, as well as introducing 3D graphics functions. Overmars began work on version 7.0, which introduced the ability to extend its functionality, around the summer of 2006, and released it on February 28, 2007, through YoYo Games. Game Maker 8 was released on December 22, 2009, adding new features such as a revamped script editor window, and the ability to import and export resources from game source files.
Starting with Game Maker 7 RC2, game data created with the program was encrypted, due to concerns over hacking.

Design and Uses
Game Maker is designed to allow its users to easily develop computer games without having to learn a complex programming language such as C++ or Java.
Game Maker's primary development interface uses a drag-and-drop system, allowing users unfamiliar with traditional programming to intuitively create games by visually organizing icons on the screen. These icons represent actions that would occur in a game, such as movement, basic drawing, and simple control structures. Users also have the ability to create their own "action libraries" using the Library Maker.
For experienced users or those with computer programming experience, Game Maker contains a built-in scripting programming language called the Game Maker Language (GML), allowing more complex games to be made with the program.
Game Maker allows the creation of many types of games, including platform games, first-person shooters, third-person shooters, massively multiplayer online games and construction and management simulation games.

Educational use
As a professor at Utrecht University, Mark Overmars developed Game Maker partly as a teaching aid for his students. It is gaining recognition as a useful teaching tool in primary and secondary schools because of its low barrier to entry and sophisticated scripting language.

Lite and Pro editions
Two versions of the Game Maker software are offered on Windows. The Lite version is free to use, while the Pro edition requires purchase. The Lite version displays a small Game Maker advertisement during the loading of the game, while the Pro version removes this. The Lite version contains most of the functionality that allows users to create games and share them either by compiling them into a Windows executable file, or publishing them on YoYo Games' website. The Lite version locks out several advanced features and functions, which are available in the Pro edition. The Pro edition contains functionality and features not available in the free Lite edition. Such features include the ability to use DLLs, particle systems, advanced drawing functions, and 3D graphics. On the Macintosh version of Game Maker, a trial version with all features unlocked can be used for 10 hours before it requires activation.

Graphics capabilities
Game Maker primarily runs games that use 2D graphics. Game Maker's graphics capabilities underwent significant improvements with each major release version, allowing for additional functionality including more efficient alpha adjustments and blending settings for sprites and other shapes. By version 6.0 (Windows), Game Maker incorporated DirectX, allowing more advanced graphics functions. Version 7.0 (Macintosh) uses OpenGL to render sprites.
Additionally, in version 6.0 (Windows), Game Maker incorporated Direct3D, allowing the use of limited 3D graphics, while version 7.0 (Macintosh) uses OpenGL for 3D graphics. It also adds limited support for simple 3D models. Converters make it possible to use more popular 3D formats such as .3ds and .obj in a 3D project. It also supports the ability to create particle effects such as rain, snow and clouds. Support for editing 32-bit PNG files was added in version 8.0 of Game Maker, which also enabled users to use images with alpha channels.
One such example of a popular three-dimensional game would be Paradox 3D.

Monday, July 28, 2008

Sonic the Hedgehog (Game)

Sonic the Hedgehog (ソニック・ザ・ヘッジホッグ, Sonikku za Hejjihoggu), trademarked Sonic The Hedgehog, is a video game character and the main protagonist of the Sonic video game series released by Sega, and appears in numerous spin-off comics, cartoons, and a feature film. The first Sonic game was released on June 23, 1991, in order to provide Sega with a mascot to rival Nintendo's flagship character Mario (see 1991 in video gaming). Since then, Sonic has become one of the world's best-known video game characters, with his series having sold more than 70 million copies. In 2005, Sonic was one of the first game character inductees into the Walk of Game, alongside Mario and Link.
Artist Naoto Ōshima, designer Hirokazu Yasuhara, and programmer Yuji Naka are generally credited with the creation of the character, a blue 15-year-old anthropomorphic hedgehog, who has the ability to run faster than the speed of sound and the ability to curl into a ball, primarily to attack enemies. This is a major part of the gameplay of the series.

Origins and history
Sega wanted a game capable of competing with Mario and a character to replace Alex Kidd as the company's mascot. Several character designs were submitted by its AM8 research and development department, including an armadillo (which then developed into Mighty the Armadillo), a dog, a Theodore Roosevelt look-alike in pajamas (which would later be the basis of Dr. Robotnik/Eggman's design), and a rabbit (intended to use its extendible ears to collect objects; these aspects were later incorporated into Ristar). Eventually, Naoto Ōshima's spiky teal hedgehog, initially codenamed "Mr. Needlemouse", was chosen as the new mascot. (Note that "needlemouse" is a direct translation of the Japanese word for hedgehog ハリネズミ (harinezumi).) Sonic's blue pigmentation was chosen to match Sega's cobalt blue logo, his shoes were a concept evolved from a design inspired by Michael Jackson's boots with the addition of the color red, which was inspired by both Santa Claus and the contrast of those colors on Jackson's Bad, while his personality was based on Bill Clinton's "Get it done" attitude. The character was created without the ability to swim because of a mistaken assumption by Yuji Naka that all hedgehogs could not do so. A group of fifteen people started working on the first Sonic the Hedgehog game, and renamed themselves Sonic Team. The game's soundtrack was composed by Masato Nakamura of the band Dreams Come True. Sega sponsored the group's "Wonder 3" tour, painting Sonic on the tour bus, distributing pamphlets advertising the game, and having footage of the game broadcast above stage prior to its release.
The original concepts had Sonic with fangs and in a band with a human girlfriend named Madonna; however, a team from Sega of America led by Madeline Schroeder, who calls herself "Sonic's mother", "softened" the character up for an American audience by removing these elements, sparking a heated issue with Sonic Team, although Naka later admitted it was probably for the best. Sonic's appearance varies greatly depending on the medium and the style in which he is drawn. In the video games, Sonic's original design by Oshima was quite short and round, with short quills, a round body and no visible irises. Artwork featuring this design, drawn by Akira Watanabe, was displayed on the package artwork for Sonic the Hedgehog, and most subsequent Sonic video games featured similar designs.
When Sonic the Hedgehog 2 for the Mega Drive appeared, the proportions of Sonic changed. The original 1:2 head to height ratio became 1:2.5.
Beginning with Sonic Adventure, in 1998, Sonic was redesigned by Yuji Uekawa as a taller character with longer legs and a less spherical body, longer and more drooping quills, the addition of shoe buckles, and green-colored irises. Further subtle changes to the character's design have been made in subsequent games. Spin-off media such as comics and cartoons have featured variations on all these video game designs, with restrictions set by the standardized model sheets.

Actor portrayal
Different actors have provided the voice for Sonic in his game appearances. Sonic originally had a few voice samples in Sonic CD, with Keiko Utoku providing the voice. Sonic's first true voice actor was Takeshi Kusao for the arcade game SegaSonic the Hedgehog, with Junichi Kanemaru continually voicing the role beginning with the release of Sonic Adventure. In Sonic Unleashed, Sonic is voiced by Tomokazu Seki whilst in Werehog form. Sonic's first English voice actor was Jaleel White (better known to fans as Steve Urkel on the TV show Family Matters) in the three animated series Adventures of Sonic the Hedgehog, Sonic the Hedgehog (SatAM) and Sonic Underground. Sonic's first English game voice was provided by Ryan Drummond beginning with Sonic Adventure, a role he continued until 2004, when he was replaced by Jason Anthony Griffith, who previously voiced the character in the American dub of the anime series Sonic X. Griffith was later replaced by Roger Craig Smith, starting with Sonic Free Riders and Sonic Colors in November 2010.