8 Biological Inspiration for Computing
Pages 247-298



From page 247...
... , rapidly process large amounts of data in a massively parallel fashion, learn from their environment with minimal human intervention, and "evolve" to become better adapted to what they are supposed to do. There is little doubt that computer systems with these properties would be highly desirable.
From page 248...
... On the other hand, biological organisms operate within a set of constraints that may limit their suitability as sources of inspiration for computing. Perhaps the most important constraint is the fact that biological organisms emerge from natural selection and the evolutionary process.
From page 249...
... In some cases, it may be possible to overcome problems found in the actual biological system when the principles underlying them are implemented in engineered artifacts.
· As noted in Chapter 1, even when biology cannot provide insight into potential computing solutions, the drive to solve biological problems can still inspire interesting, relevant, and intellectually challenging research in computing -- so biology can serve as a useful and challenging problem domain for computing.3
3 For example, IBM used the problem of protein folding to motivate the development of the BlueGene/L supercomputer.
From page 250...
... Similarly, although the field of "artificial neural networks" is an information-processing paradigm inspired by the parallel processing capabilities and structure of nerve tissue, and although it attempts to mimic biological learning by adjusting "synaptic" connections between artificial processing elements, the extent to which an artificial neural network reflects real neural systems may be tenuous.
8.1.3 Multiple Roles: Biology for Computing Insight
Biological inspiration can play many different roles in computing, and confusion about this multiplicity of meanings accounts for a wide spectrum of belief about the value of biology for developing better computer systems and improved performance of computational tasks.
From page 251...
... An example of using a biological metaphor for understanding some dimension of computing relates to computer security. From many centuries of observation, it is well known that an ecology based on a monoculture is highly vulnerable to threats that are introduced from the outside.
From page 252...
... Or, it may implement an architecture or a way to organize and design the structural and dynamic relationships between elements in a complex system, knowledge of which might greatly improve the design of an engineered artifact. In this category are the neural network architecture as inspired by the activation model of dendrites and axons in the brain, evolutionary computation as driven by genomic changes and selection pressures, and the use of electroactive polymers as actuator mechanisms for robots, inspired by the operation of animal muscles (rather than, for example, gears)
From page 253...
... Today, the Reynolds simulation is regarded as one of the best and most evocative demonstrations of emergent behavior, in which complex global behavior arises from the interaction of simple local rules. The simple-rule/complex-behavior approach has become a widely used technique in computer animation -- which was Reynolds' primary interest in the first place.12
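To make the simple-rule/complex-behavior point concrete, the following minimal Python sketch implements a boids-style update with the three classic local rules (separation, alignment, cohesion). The flock size, interaction radius, and rule weights are illustrative choices, not Reynolds' original parameters.

import random

# Boids-style flocking sketch: each boid reacts only to neighbors within
# RADIUS, yet the flock as a whole develops coherent motion.
N, RADIUS, DT = 30, 10.0, 0.1
boids = [{"pos": [random.uniform(0, 50), random.uniform(0, 50)],
          "vel": [random.uniform(-1, 1), random.uniform(-1, 1)]} for _ in range(N)]

def step(boids):
    new_vels = []
    for b in boids:
        # Neighbors within the interaction radius (local information only).
        nbrs = [o for o in boids if o is not b and
                sum((o["pos"][k] - b["pos"][k]) ** 2 for k in (0, 1)) < RADIUS ** 2]
        vx, vy = b["vel"]
        if nbrs:
            n = len(nbrs)
            cx = sum(o["pos"][0] for o in nbrs) / n  # cohesion: steer toward center
            cy = sum(o["pos"][1] for o in nbrs) / n
            ax = sum(o["vel"][0] for o in nbrs) / n  # alignment: match heading
            ay = sum(o["vel"][1] for o in nbrs) / n
            sx = sum(b["pos"][0] - o["pos"][0] for o in nbrs)  # separation
            sy = sum(b["pos"][1] - o["pos"][1] for o in nbrs)
            vx += 0.01 * (cx - b["pos"][0]) + 0.05 * (ax - vx) + 0.02 * sx
            vy += 0.01 * (cy - b["pos"][1]) + 0.05 * (ay - vy) + 0.02 * sy
        new_vels.append((vx, vy))
    for b, (vx, vy) in zip(boids, new_vels):
        b["vel"] = [vx, vy]
        b["pos"][0] += vx * DT
        b["pos"][1] += vy * DT

for _ in range(100):
    step(boids)
avg_vx = sum(b["vel"][0] for b in boids) / N
avg_vy = sum(b["vel"][1] for b in boids) / N
print(f"mean velocity after 100 steps: ({avg_vx:.2f}, {avg_vy:.2f})")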
From page 254...
... Eberhart, "Particle Swarm Optimization," pp. 1942-1948 in Proceedings of the IEEE International Conference on Neural Networks, IEEE Service Center, Piscataway, NJ, 1995; R
From page 255...
... Today, swarm algorithms are based on the loose and imprecise specification of a relatively small number of parameters -- but it is almost certainly true that engineered artifacts that exhibit complex designed behavior will require the tight specification of many parameters. This point is perhaps most obvious in the cooperative construction problem, where the rule sets that produce interesting, complex structures are actually very rare; most self-organized structures look more like random blobs.20 The same problem is common to all collective behaviors; finding the right rules is still largely a matter of trial and error -- not least because it is in the very nature of emergence for a simple-seeming change in the rules to produce a huge change in the outcome.
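The particle swarm optimization algorithm cited above (Kennedy and Eberhart) illustrates the small-parameter character of swarm methods: the canonical velocity update involves only an inertia weight and two attraction coefficients. A minimal sketch follows, using commonly quoted illustrative values for those parameters.

import random

# Particle swarm optimization sketch: each particle is pulled toward its own
# best-seen position (cognitive term) and the swarm's best (social term).
def pso(f, dim=2, n_particles=20, iters=200, lo=-5.0, hi=5.0,
        w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])  # cognitive pull
                             + c2 * r2 * (gbest[d] - pos[i][d]))    # social pull
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Example: minimize a simple sphere function.
print(pso(lambda x: sum(v * v for v in x)))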
From page 256...
... Brooks claims that the subsumption architecture is capable of accounting for the behavior of insects, such as a house fly, using a combination of simple machines with no central control, no shared representation, slow switching rates, and low-bandwidth communication. The result is robust and reliable behavior despite limited sensing capability and an unpredictable environment: individual behaviors can compensate for one another's failures, so coherent behavior emerges despite the limitations of each component.
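The flavor of subsumption -- prioritized behavior layers with no shared world model, where a higher layer overrides lower ones when its trigger condition holds -- can be conveyed in a few lines. The sketch below is a toy illustration; the behavior names and sensor fields are hypothetical, not Brooks' implementation.

# Subsumption-style controller sketch: layers are tried in priority order;
# the first layer whose condition fires "subsumes" everything below it.

def avoid_obstacle(sensors):
    # Highest priority: reflexive collision avoidance.
    if sensors["obstacle_distance"] < 0.3:
        return "turn_away"
    return None  # not triggered; defer to lower layers

def follow_light(sensors):
    if sensors["light_gradient"] > 0.1:
        return "steer_toward_light"
    return None

def wander(sensors):
    return "move_forward"  # default layer, always produces an action

LAYERS = [avoid_obstacle, follow_light, wander]  # priority order

def control(sensors):
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:
            return action  # higher layer subsumes the rest

print(control({"obstacle_distance": 0.2, "light_gradient": 0.5}))  # turn_away
print(control({"obstacle_distance": 2.0, "light_gradient": 0.5}))  # steer_toward_light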
From page 257...
... Specifically, their algorithm is based on the repetition of a straight-line run for a certain time, followed by a random change in direction that sets up the direction for a new run. If the bacterium senses a higher concentration in its immediate environment, the run length is longer.
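A minimal run-and-tumble sketch along these lines treats chemotaxis as a gradient-climbing heuristic: straight runs punctuated by random tumbles, with runs that improve the sensed concentration lasting longer. The attractant field, step size, and tumble probabilities below are illustrative.

import math, random

def concentration(x, y):
    # Hypothetical attractant field peaked at the origin.
    return math.exp(-(x * x + y * y) / 100.0)

x, y = 20.0, 20.0
heading = random.uniform(0, 2 * math.pi)
for _ in range(500):
    before = concentration(x, y)
    x += math.cos(heading)          # one step of a straight run
    y += math.sin(heading)
    improving = concentration(x, y) > before
    # Tumble rarely when things are improving, often when they are not,
    # which effectively lengthens runs up the gradient.
    if random.random() < (0.1 if improving else 0.7):
        heading = random.uniform(0, 2 * math.pi)

print(f"final position ({x:.2f}, {y:.2f}), "
      f"concentration {concentration(x, y):.3f}")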
From page 258...
... Evans, and L. Davidson, "A Biologically Inspired Programming Model for Self-healing Systems," Proceedings of the First Workshop on Self-Healing Systems, November 2002, available at http://www-2.cs.cmu.edu/~garlan/woss02/.
From page 259...
... In support of this approach, a number of reports39 cite security problems that arise from flaws in security policy, bugs in programs, and configuration errors and argue that correcting these flaws, bugs, and errors will result in greater security. A complementary approach is to take as a given the inability to control the computing or network environment.40 This approach is based on the idea that computer security can result from the use of system design principles that are more appropriate for the imperfect, uncontrolled, and open environments in which most computers and networks currently exist.
From page 260...
...
8.2.5.3 Immunological Design Principles for Computer Security
The immune system exhibits a number of characteristics -- one might call them design principles -- that could reasonably describe how effective computer security mechanisms might operate in a computer system or network. (As in Section 5.4.4.3, "immune system" is understood to mean the adaptive immune system.)
From page 261...
... By contrast, computer security systems that look for precise signatures of intruders (e.g., viruses) are easily circumvented.
From page 262...
... In the context of computer security, Forrest and Hofmeyr have described models for network intrusion detection and virus detection.49 In the network intrusion detection example, self is defined through a set of "normal" connections in a local area network. Each connection is defined by a triplet consisting of the addresses of the two parties in communication with each other and the port over which they communicate (a total of 49 bits)
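Although the excerpt breaks off here, the negative-selection idea behind the scheme can be illustrated in toy form: random detectors are censored against "self" strings, and the survivors flag whatever they match. The 16-bit encoding, contiguous-match rule, and parameter values below are simplifications for illustration, not Forrest and Hofmeyr's actual 49-bit implementation.

import random

R = 6  # a detector matches a string if R contiguous bits agree in position

def matches(detector, s, r=R):
    return any(detector[i:i + r] == s[i:i + r] for i in range(len(s) - r + 1))

def encode(src, dst, port, width=16):
    # Hash a connection triplet down to a short bit string (toy encoding).
    return format(hash((src, dst, port)) % (1 << width), f"0{width}b")

self_set = {encode("10.0.0.1", "10.0.0.2", 80),
            encode("10.0.0.3", "10.0.0.2", 22)}  # hypothetical normal traffic

# Censoring phase: keep only detectors that match nothing in self.
detectors = []
while len(detectors) < 200:
    d = "".join(random.choice("01") for _ in range(16))
    if not any(matches(d, s) for s in self_set):
        detectors.append(d)

def anomalous(connection_bits):
    return any(matches(d, connection_bits) for d in detectors)

print(anomalous(encode("10.0.0.1", "10.0.0.2", 80)))       # known-normal -> False
print(anomalous(encode("203.0.113.9", "10.0.0.2", 4444)))  # novel -> likely True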
From page 263...
... Thus, it is interesting to consider how a number of immunological mechanisms known today might be useful in making the analogy closer, using the functions and design principles of these specific mechanisms within the general context of an immunologically based approach to computer security. One such mechanism is antigen processing and the major histocompatibility complex (MHC)
From page 264...
... Given that the immune system is a very complex entity whose operation is not fully understood, a bottom-up development of a computer security system based on the immune system is not possible today. The human immune system has evolved to its present state through many evolutionary accidents as well as constraints imposed by biology and chemistry -- much of which is likely artifactual and largely irrelevant both to the underlying principles the system embodies and to the design of a computer security system.
From page 265...
... Evolutionary computation is inspired by genetics and evolutionary events.56 Given a particular problem for which a solution is desired, evolutionary computation requires three components:
54 H. Abelson, T.F.
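The page break cuts off the list of components; in one common formulation they are a representation for candidate solutions, a fitness function, and variation-plus-selection operators. A minimal Python sketch under that assumption, using a bit-string representation and a toy "count the ones" fitness:

import random

GENOME_LEN, POP, GENS, MUT = 32, 40, 60, 0.02

def fitness(g):
    return sum(g)  # toy objective: maximize the number of 1 bits

def mutate(g):
    # Flip each bit independently with small probability MUT.
    return [b ^ (random.random() < MUT) for b in g]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)  # one-point crossover
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP)]
for gen in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]  # truncation selection: keep the fitter half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)
print(fitness(best), "of", GENOME_LEN)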
From page 266...
... using only local interactions between identically programmed deformable cells. That is, the global shape results from a coordinated set of local shape changes in individual cells.
From page 267...
... , or a result that wins or ranks highly in a judged competition involving human contestants.58 Evolutionary computation has demonstrated successes according to all of these measures. For example, there are at least 21 instances in which evolutionary techniques have led to artifacts related to previously patented inventions.59 Eleven of these infringe on previously issued patents, and ten duplicate the functionality of previously patented inventions in a non-infringing way.
From page 268...
... Moreover, evolutionary techniques tend to work best on problems involving relatively large search spaces and large numbers of variables that are not well understood. Evolutionary algorithms have been able to construct and adapt complex neural networks that are intractable analytically or for which derivative-based back-propagation is inapplicable.
From page 269...
... That is, the particular DNA sequence of an organism can be said to be biology's representation of a "solution" to the problem of adapting the organism to a particular set of evolutionary selective pressures. From the standpoint of someone solving a problem with techniques from evolutionary computation, the question arises as to the analogue of DNA.
From page 270...
... Thus, one intellectual thrust in evolutionary computation focuses on the creation of developmental mechanisms that can be evolved to better create their own complexity. For example, evolutionary techniques can be used to evolve neural networks (see Section 8.3.3.2)
From page 271...
...
8.3.1.7 Behavior of Evolutionary Processes
Today, those working in evolutionary computation are not able to predict, in general, how long it will take to evolve some desired solution, or to determine a priori how large an initial population should be, how rapidly mutations should occur, or how often genetic crossovers should take place.
From page 272...
... Full, "The Role of the Mechanical System in Control: A Hypothesis of Self-stabilization in Hexapedal Runners," Philosophical Transactions of the Royal Society of London B 354:849-862, 1999; A Altendorfer et al., "RHex: A Biologically Inspired Hexapod Runner," Journal of Autonomous Robots 11:207-213, 2002.
From page 273...
... Inoue, eds., MIT Press, Cambridge, MA, 1985.
From page 274...
... Neural networks are among the most successful of biology-inspired computational systems and are modeled on the massively parallel architecture of the brain -- and on the brain's inherent ability to learn.74
74 C. Koch, "What Can Neurobiology Teach Computer Engineers?
From page 275...
... These brain-like characteristics give neural networks some decided advantages over traditional algorithms in certain contexts and problem types. Because they can learn, for example, the networks can be trained to recognize patterns and compute functions for which no rigorous algorithms are known, simply by being shown examples.
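A minimal illustration of "learning by being shown examples": the single artificial neuron below is never given a classification rule, only labeled examples, and adjusts its "synaptic" weights until it reproduces the labels. The data, learning rate, and training length are toy values.

import random

# Perceptron sketch: weights are learned from labeled examples of a
# linearly separable toy problem rather than programmed explicitly.
examples = [((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.8, 0.9), 1), ((0.9, 0.7), 1)]
w = [random.uniform(-0.5, 0.5) for _ in range(2)]
bias, lr = 0.0, 0.1

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + bias > 0 else 0

for _ in range(100):                       # training: classic perceptron rule
    for x, target in examples:
        err = target - predict(x)
        w[0] += lr * err * x[0]
        w[1] += lr * err * x[1]
        bias += lr * err

print([predict(x) for x, _ in examples])   # should reproduce the labels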
From page 276...
... Neural networks' pattern-detection ability has likewise made them a useful tool for fingerprint matching, face identification, and surveillance applications.90
· Robot navigation. Neural networks' ability to extract relevant features from noisy sensor data can help autonomous robots do a better job of avoiding obstacles.91
· Detection of medical phenomena.
From page 277...
... After this training period, the expertise of the network can be used to warn a technician of an upcoming breakdown before it occurs and causes costly unforeseen "downtime."
· Engine management. Neural networks have been used to analyze the input of sensors from an engine.
From page 278...
... (In practice, each agent could be modeled as an expert system, as a neural network, or in any number of other ways.) The first ant-based optimization -- the Ant Colony Optimization algorithm -- was created in the early 1990s.96 The algorithm is based on observations of ant foraging, something that ants do with high efficiency.
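A compact sketch of the ant colony idea on a small traveling-salesman instance: ants build tours probabilistically, biased by pheromone levels; shorter tours deposit more pheromone; and pheromone evaporates. The instance and parameter values are illustrative.

import math, random

cities = [(random.random(), random.random()) for _ in range(8)]
n = len(cities)
dist = [[math.dist(cities[i], cities[j]) or 1e-9 for j in range(n)] for i in range(n)]
tau = [[1.0] * n for _ in range(n)]        # pheromone matrix
ALPHA, BETA, RHO, ANTS, ITERS = 1.0, 2.0, 0.5, 10, 50
best_tour, best_len = None, float("inf")

for _ in range(ITERS):
    tours = []
    for _ in range(ANTS):
        tour = [random.randrange(n)]
        while len(tour) < n:
            i = tour[-1]
            cand = [j for j in range(n) if j not in tour]
            # Bias the next step by pheromone (tau) and closeness (1/dist).
            weights = [(tau[i][j] ** ALPHA) * ((1.0 / dist[i][j]) ** BETA) for j in cand]
            tour.append(random.choices(cand, weights)[0])
        length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
        tours.append((tour, length))
        if length < best_len:
            best_tour, best_len = tour, length
    for row in tau:                        # evaporation
        for j in range(n):
            row[j] *= (1 - RHO)
    for tour, length in tours:             # deposit: shorter tours lay more
        for k in range(n):
            i, j = tour[k], tour[(k + 1) % n]
            tau[i][j] += 1.0 / length
            tau[j][i] += 1.0 / length

print(round(best_len, 3))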
From page 279...
100 E. Bonabeau, "Swarm Intelligence," presented at the O'Reilly Emerging Technology Conference, April 22-25, 2003, Santa Clara, CA, available at http://conferences.oreillynet.com/presentations/et2003/Bonabeau_eric.ppt.
From page 280...
... For a wasp carrying a load of wood pulp, say, such a rule might be, "If you're surrounded by three walls, then deposit the pulp." In general, each insect will modify the environment encountered by the others, and the structure will organize itself in much the same way that the proteins comprising a virus particle assemble themselves inside an infected cell. Ant algorithms are conceptually similar to the particle swarm optimization algorithm described in Section 8.2.1.
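A toy stigmergy sketch in the spirit of the wasp rule quoted above: wandering builders on a grid deposit material only where enough neighboring cells are already filled, so each deposit changes what later builders encounter, and structure accretes with no blueprint. The grid size, seed, and threshold are illustrative.

import random

SIZE, STEPS = 21, 20000
grid = [[0] * SIZE for _ in range(SIZE)]
for dx in (-1, 0, 1):                      # seed: a short initial wall
    grid[SIZE // 2][SIZE // 2 + dx] = 1

def filled_neighbors(x, y):
    # Count deposits in the 8 surrounding cells (toroidal wrap for simplicity).
    return sum(grid[(y + dy) % SIZE][(x + dx) % SIZE]
               for dx in (-1, 0, 1) for dy in (-1, 0, 1)
               if (dx, dy) != (0, 0))

for _ in range(STEPS):
    x, y = random.randrange(SIZE), random.randrange(SIZE)  # wandering builder
    # Local rule, wasp-style: deposit only where >= 3 neighbors are built.
    if grid[y][x] == 0 and filled_neighbors(x, y) >= 3:
        grid[y][x] = 1

for row in grid:
    print("".join("#" if c else "." for c in row))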
From page 281...
... Biomolecular computing provides a number of advantages that make it quite attractive as a potential base for computation. Most obvious are its information density, about 10^21 bits per gram (billions of times more dense than magnetic tape)
From page 282...
... a specific DNA sequence.
1. Generate all possible paths through the graph. Combine large amounts of these DNA sequences,
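A software analogue of this generate-and-filter strategy makes the logic plain, even though it forgoes the massive parallelism that makes the DNA version interesting: generate all candidate paths, then successively discard those that fail the endpoint, edge, and visit-every-node tests, mirroring the biochemical separation steps. The five-node graph below is hypothetical.

from itertools import product

# Adleman-style generate-and-filter for the Hamiltonian path problem,
# simulated in software on a made-up directed graph.
EDGES = {(0, 1), (1, 2), (2, 3), (3, 4), (0, 2), (1, 3)}
N, START, END = 5, 0, 4

# Step 1 analogue: "synthesize" all candidate paths of length N.
candidates = product(range(N), repeat=N)

def keep(path):
    return (path[0] == START and path[-1] == END                      # endpoints
            and all((a, b) in EDGES for a, b in zip(path, path[1:]))  # real edges
            and len(set(path)) == N)                                  # each node once

print([p for p in candidates if keep(p)])  # surviving "strands"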
From page 283...
... In some variants, DNA strands are chemically anchored to various types of beads; these beads can be designed with different properties, such as being magnetic or electrically charged, allowing the manipulation of the DNA strands through the application of electromagnetic fields. Another solution is to use microfluidic technologies, which consist of MEMS devices that operate as valves and pumps; a properly designed system of pipettes and microfluidic devices offers significant advantages by automating tasks and reducing the total volume of materials required.110 Still another variant is to restrict the chemical operations to a surface, rather than to a three-dimensional volume.111 In this approach, DNA sequences, perhaps representing all of the solution space of an NP problem, would be chemically attached to a surface.
From page 284...
... For this, and indeed any biomolecular computing system, a challenge is the transformation of information from digital representation into biomolecules and back again. Traditional molecular biological engineering has provided a number of tools for synthesizing DNA sequences and reading them out; however, these tend to be fairly lengthy processes.
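In its simplest conceptual form, the digital-to-molecular translation is a change of alphabet, as in the two-bits-per-base mapping sketched below. Real encodings must also avoid unwanted secondary structure and cross-hybridization, so this mapping is illustrative only.

# Toy digital <-> DNA translation: two bits per base, round-trippable.
BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS = {v: k for k, v in BASE.items()}

def to_dna(bits):
    assert len(bits) % 2 == 0
    return "".join(BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def to_bits(seq):
    return "".join(BITS[b] for b in seq)

msg = "0110100111001010"
strand = to_dna(msg)
print(strand)                  # CGGCTAGG
assert to_bits(strand) == msg  # round trip recovers the original bits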
From page 285...
... . That is, while the number of different DNA sequences required grows linearly with the number of directed paths in a graph, the volume of those DNA sequences needed to solve a given problem is exponential in the problem's size (in this case, the number of nodes in the graph)
From page 286...
... This model would be best used for massively parallel applications, since the individual operations on DNA are still quite slow compared to electronic components, but it would offer massive improvements in density and energy efficiency over traditional computers. In a slightly different approach, enzymes that operate on DNA sequences are used as logic gates, such as XOR, AND, or NOT.
From page 287...
... It usually involves the creation of novel biological functions, such as custom metabolic or genetic networks, novel amino acids and proteins, and even entire cells. For example, a synthetic biology project may seek to modify Escherichia coli to fluoresce in the presence of TNT, creating in effect a new organism that can be used for human purposes.117 In one sense, this is a mirror image of natural selection: adding new features to lineages not through mutation and blind adaptation to an environment, but through planned design and forethought.
From page 288...
... However, all of these goals will require a different set of approaches and techniques than traditional biology or any natural science provides. While synthetic biology employs many of the same techniques and tools as systems biology -- simulation, computer models of genetic networks, gene sequencing and identification, massively parallel experiments -- it is more of an engineering discipline than a purely natural science.
From page 289...
... Once a logic gate is created, all of the digital logic design principles and tools developed for use in the electronic domain are in principle applicable to the construction of systems involving cellular logic. A basic construct in digital logic is the inverting gate.
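A cellular inverter can be sketched numerically as a repressor with Hill-function kinetics: a high input (repressor) concentration shuts off production of the output protein, and a low input lets it be expressed. The rate constants and Hill coefficient below are illustrative, not measured values.

# Genetic inverter sketch: output protein synthesis is repressed by the
# input species via a Hill function, and the output also decays.
def simulate_inverter(input_level, steps=2000, dt=0.01,
                      k_max=1.0, K=0.5, n=2, degrade=0.2):
    output = 0.0
    for _ in range(steps):
        production = k_max / (1.0 + (input_level / K) ** n)  # repression term
        output += dt * (production - degrade * output)        # synthesis - decay
    return output

for level in (0.0, 0.25, 1.0, 4.0):
    print(f"input {level:4.2f} -> approx. steady output "
          f"{simulate_inverter(level):.3f}")
# High input yields low output and vice versa: an inverting gate.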
From page 290...
...
8.4.2.3 Broader Views of Synthetic Biology
While cellular logic emphasizes the biological network as a substrate for digital computing, synthetic biology can also use analog computing. To support analog computing, the biomolecular networks involved would be sensitive to small changes in concentrations of substances of interest.
From page 291...
... , but synthetic biology would allow finer control, increased accuracy, and the ability to customize such processes in terms of quantity, precise molecular characteristics, and chemical pathways, even when the desired characteristics are not available in nature.
8.4.2.5 Challenges
Synthetic biology brings the techniques and metaphors of electronic design to the modification of biomolecular networks.
From page 292...
... Some researchers have suggested that synthetic biology needs an "Asilomar" conference, by analogy to the conference in 1975 that established the ground rules for genetic engineering.132 Some technical approaches to address these concerns are possible, however. These include "barcoding" engineered organisms, that is, including a defined marker sequence of DNA in their genome (or in every inserted sequence)
From page 293...
... One important focus of DNA self-assembly research draws on the theory of Wang tiles, a mathematical theory of tiling first laid out in 1961.137 Wang tiles are polygons with colored edges, and they must be laid out so that the abutting edges of any two neighboring tiles have the same color. Later, Berger established three important properties of such tiling: whether a given set of tiles can cover an area is undecidable; aperiodic sets of tiles can cover an area; and tiling can simulate a universal Turing machine,138 and thus is a full computational system.139 The core of DNA self-assembly is based on constructing special forms of DNA in which strands cross over between multiple double helices, creating strong two-dimensional structures known as DNA tiles.
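The edge-matching constraint at the heart of Wang tiling (and of the DNA tiles built on it) is easy to state in code. The brute-force sketch below counts the valid tilings of a small grid using a made-up four-tile set; real aperiodic sets, and the tile sets used in DNA self-assembly, are far larger.

from itertools import product

# Each tile is (north, east, south, west) edge colors; neighbors must agree
# across every shared seam. This toy set is invented for illustration.
TILES = [("r", "g", "r", "g"), ("g", "r", "g", "r"),
         ("r", "r", "g", "g"), ("g", "g", "r", "r")]
W = H = 3

def valid(grid):
    for y, x in product(range(H), range(W)):
        if x + 1 < W and grid[y][x][1] != grid[y][x + 1][3]:  # east/west seam
            return False
        if y + 1 < H and grid[y][x][2] != grid[y + 1][x][0]:  # south/north seam
            return False
    return True

count = 0
for choice in product(range(len(TILES)), repeat=W * H):  # exhaustive search
    grid = [[TILES[choice[y * W + x]] for x in range(W)] for y in range(H)]
    if valid(grid):
        count += 1
print(count, "valid 3x3 tilings of this toy tile set")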
From page 294...
... The manipulation of DNA sequences by enzymatic activity has the potential to be a very sequence-specific methodology for the fabrication of DNA nanostructures.4
DNA-modified Nanoparticles
Nanoscale objects that incorporate DNA molecules have been used successfully to create biosensor materials. In one example, the DNA is attached to a nanometer-sized gold particle, and then the nucleic acid is used to provide biological functionality, while the optical properties of the gold nanoparticles are used to report particle-particle interactions.5 Semiconductor particles can also be used, and recently the attachment of DNA to dendrimers or polypeptide nanoscale particles has been exploited for both sensing and drug delivery.6
DNA Code Design
To successfully self-assemble nucleic acid nanostructures by hybridization, the DNA sequences (often referred to as DNA words)
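The excerpt breaks off here, but the word-design constraints it introduces can be illustrated directly. The sketch below checks a toy word set against two common requirements for avoiding cross-hybridization: every pair of words should differ in many positions, and each word should also be far from the reverse complement of every word. The word set and distance threshold are toy values.

# DNA word-design constraint checker (toy version).
COMP = {"A": "T", "T": "A", "C": "G", "G": "C"}

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def revcomp(s):
    return "".join(COMP[b] for b in reversed(s))

def check_words(words, d_min):
    ok = True
    for i, w in enumerate(words):
        for j, v in enumerate(words):
            if i < j and hamming(w, v) < d_min:
                print(f"too similar: {w} / {v}"); ok = False
            if hamming(w, revcomp(v)) < d_min:  # includes self-complement check
                print(f"may cross-hybridize: {w} / revcomp({v})"); ok = False
    return ok

print(check_words(["ACCTGA", "TGGATC", "CATCGT"], d_min=3))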
From page 295...
... Condon, and R.M. Corn, "On Combinatorial Word Design," DIMACS Series in Discrete Mathematics and Theoretical Computer Science 54:75-90, 2000.
From page 296...
... Computation through self-assembly is an attractive alternative to traditional exhaustive-search DNA computation. Whereas traditional DNA computation, such as that performed by Adleman, requires a number of laboratory steps that grows linearly with the input size, in algorithmic self-assembly the computation occurs in a single step.
From page 297...
... The worker cells have complex programs, developed through amorphous computing technology. The programs control how the workers perform their particular task of assembling the appropriate components in the appropriate patterns.
From page 298...
... To be adopted successfully as an industrial technology, however, DNA self-assembly faces challenges similar to those of solution-based exhaustive-search DNA computing: a high error rate, the need to run new laboratory procedures for each computation, and the increasing capability of non-DNA technologies to operate at nanoscales. For example, while it is likely true that current lithography technology has limits, various improvements already demonstrated in laboratories, such as extreme ultraviolet lithography, halo implants, and laser-assisted direct imprint techniques, can achieve feature sizes of 10 nm, comparable to a single DNA tile.

