
Collection Guide
Collection Title: Guide to the Stanford University, Department of Computer Science, Stanford Computer Forum Distinguished Lecture Series, Videorecordings
Collection Number: V0058

Collection Contents

 

Original accession

Box 1

V58.1 Mainframe CPU Design with LSI and VLSI, by Dr. Gary Tjaden. 8 Nov. 1978

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

A new design approach, called Multiple-Microinstruction-Stream/Single-Macroinstruction-Stream, has been shown to yield mainframe performance using off-the-shelf LSI modules. Design with VLSI does not lend itself to the use of off-the-shelf modules, but a very orderly design, such as is encouraged by this new approach, is required. This talk describes some of the more subtle aspects of designing with LSI and discusses the new considerations necessary for designing with VLSI.
Box 1

V58.2 Fibernet: Multimode Optical Fibers for Local Computer Networks, by Dr. Robert M. Metcalfe. 6 Dec. 1978

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

Local computer networks which communicate over copper conductors have been developed both to promote resource sharing and to provide increased performance. Such networks typically operate at bandwidth x length (Bw*L) products up to a few MHz*km. Here we consider the use of fiber optics in such networks, and give a status report on a star-configured fiber optic network experiment called Fibernet, which operates at a Bw*L product of ~100 MHz*km at a data rate of 150 Mbits/s and which in its final phase will connect up to 19 stations. We compare the merits and problems of linear, ring and several star configurations, and of active versus passive networks.
Box 1

V58.3 Dataflow: Architecture and Language, by Prof. Charles S. Wetherell. 18 April 1979

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

Large scientific computations require not just greater throughput from computing systems, but also faster response time. The dataflow architecture will run faster by utilizing an army of cheap, slow, easily replicated processing units. This novel design also forces some novelties into programming languages used on the dataflow machine. Particularly important are the explicit exposure of parallel computations and the complete avoidance of side effects. The new architecture, the associated language, and the effects on scientific applications are all discussed.
Box 1

V58.4 A Proposed Standard for Floating-Point Arithmetic, by Dr. John Palmer. 1 Nov. 1978

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

A standard for floating-point arithmetic is under serious consideration by an IEEE standards committee. INTEL has implemented a subset of the proposed standard in hardware and software. The proposed standard is explained along with some evaluation of its benefits and costs.
Box 1

V58.5 The Mu-Net: A Scalable Multiprocessor Architecture, by Prof. Stephen Ward. 17 Jan. 1979

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

A research program leading to tools and methodology for dealing with a wide spectrum of real-time computation requirements is described. Two major foci of this research are CONSORT, a very high-level programming language specialized to applications imposing hard real-time constraints, and the Mu-Net, a multiprocessor run-time system whose performance characteristics are transparently scalable over a wide range.
Box 1

V58.6 Design Tradeoffs of the VAX 11/780, by Dr. David M. Plummer. 14 Feb. 1979

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

David Plummer provides an overview of the evolutionary design process involved in extending the PDP-11 into the VAX 11/780. A number of the interesting architectural features of the machine are discussed within the context of compromises and tradeoffs in performance, cost and utility. These topics include the virtual memory system, variable-length instruction formats, bit efficiency, uniform software interfaces and the distribution of architectural components to achieve better performance.
Box 2

V58.7 Intel 2920 Analog Microprocessor: Architecture and Applications by Dr. Ted Hoff. 10 Oct. 1979

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

The 2920 is a single chip digital microprocessor which contains analog to digital and digital to analog conversion. This processor may be used to implement a variety of traditional analog functions such as modems, speech synthesizers and filter arrays.
Box 2

V58.8 The Motorola 68000 Microprocessor, by Dr. Edward Stritter. 31 Oct. 1979

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

The semiconductor industry has translated advances in LSI technology into a new generation of very high performance microprocessors. One such processor is the Motorola 68000. This presentation discusses the 68000 from the points of view of the programmer, the system designer and the computer architect. The instruction set, register set and the bus structure are described. In addition, the implementation and internal structure of the chip are discussed, particularly in terms of performance goals and LSI technology constraints.
Box 2

V58.9 Secrets of High Performance Processor Design, by Dr. Thomas M. McWilliams. 9 April 1980

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

This talk reveals a number of major secrets central to the design of modern top-of-the-line digital processors. It treats several of the basic techniques and circuits which performance-oriented designers currently use to realize maximum-throughput implementations of a given architecture, and discusses the impact of several common architectural features on the design of very high performance machines. The throughput-enhancing techniques used in the designs of the S-1 Mark I and Mark IIA processors are used extensively as illustrations of these design principles and practices.
Box 2

V58.10 An Implementation of PL/I Subset G for Limited Resource Computers, by Dr. Gary A. Kildall. 16 April 1980

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

PL/I-80 is an application language for microprocessor-based computers which use the CP/M operating system, and is based upon the new ANS PL/I General Purpose Subset, called Subset G. The subset includes the more significant application-oriented features, while eliminating the seldom-used or redundant constructs. The resulting language is simpler to learn, more consistent in form, and easier to compile. The PL/I-80 translator, discussed here, is a three-pass optimizing compiler which operates under CP/M in a 38-kilobyte area. The significant features of Subset G are presented, along with a number of implementation techniques used to reduce memory requirements while providing a measure of code improvement during the code generation phase.
Box 2

V58.11 "Execute Only" Program Memories, by Dr. Marc T. Kaufman. 7 May 1980

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

A new technique has been developed that allows computer programs to be protected from copying, while still allowing them to be executed. The method is applicable to general purpose computers, including hobby computers.
Box 2

V58.12 Experience with an Applicative String Processing Language, by Dr. James Morris. 28 May 1980

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

Experience using and implementing the language Poplar is described. The major conclusions are: applicative programming can be made more natural through the use of built-in iterative operators and post-fix notation. Clever evaluation strategies, such as lazy evaluation, can make applicative programming more computationally efficient. Pattern matching can be performed in an applicative framework. Many problems remain.
Box 3

V58.13 Architecture of the MIT LISP Machine, by Thomas F. Knight. 12 Nov. 1980

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

The LISP machine is a general purpose emulation engine tailored to the execution of compiled LISP. It was designed in 1975 as the hardware basis for a completely new implementation of LISP in a personal machine environment. The LISP machine, like other personal machines, has an integrated display, keyboard/mouse, and high bandwidth local network. In the light of the now familiar nature of these components, this talk concentrates on the design and architecture of the emulation engine, and the features which make it especially well suited for execution of compiled LISP. These include a fast typed-object dispatch and hardware for assisting the Baker real-time garbage collection algorithm.
Box 3

V58.14 The Transaction Concept: Virtues and Limitations, by Dr. James N. Gray. 13 May 1981


Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

Transactions are transformations of state which have the properties of atomicity (all or nothing), durability (effects survive failures) and consistency (they implement the correct transformations). The transaction concept is key to the structuring of data management applications. The concept seems to have applicability to programming systems in general. This talk restates the transaction concepts and attempts to put several implementation approaches in perspective. It then describes some areas which require further study: (1) the integration of the transaction concept with the notion of abstract data type, (2) some techniques to allow transactions to be composed of subtransactions, and (3) handling transactions which last for extremely long times (days or months).
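The "all or nothing" atomicity property described above can be made concrete with a small sketch. This is a hypothetical Python illustration, not material from the lecture: a transaction is a list of updates applied to a shadow copy of the state, installed only if every update passes a consistency check.

```python
# Illustrative sketch only: atomicity and consistency for a transaction
# over a dictionary of account balances. A transaction is a list of
# (key, delta) updates; if any update would violate the consistency rule
# (no negative balances), none of the updates take effect.

def run_transaction(state, updates):
    """Apply all updates or none; return True on commit, False on abort."""
    shadow = dict(state)            # work on a shadow copy of the state
    for key, delta in updates:
        shadow[key] = shadow.get(key, 0) + delta
        if shadow[key] < 0:         # consistency check fails: abort
            return False            # original state is untouched
    state.update(shadow)            # commit: install all effects at once
    return True

accounts = {"a": 100, "b": 50}
assert run_transaction(accounts, [("a", -30), ("b", 30)])        # commits
assert not run_transaction(accounts, [("a", -200), ("b", 200)])  # aborts
assert accounts == {"a": 70, "b": 80}   # the abort left no partial effects
```

Durability, the third property Gray lists, is what this in-memory sketch deliberately omits: a real implementation must also survive failures, typically via a log.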
Box 3

V58.15 A RISCy Approach to Microcomputer Technology, by Prof. David A. Patterson. 17 May 1981

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

The Reduced Instruction Set Computer (RISC) Project investigates an alternative to the general trend towards computers with increasingly complex instruction sets. By a judicious choice of a proper set of instructions and the design of a corresponding architecture, one can obtain a machine with a high effective throughput. The simplicity of the instruction set and addressing modes allows most instructions to execute in a single machine cycle, and the simplicity of each instruction guarantees a short cycle time. In addition, such a machine should have a much shorter design time. This computer is being implemented as part of a four-quarter sequence of graduate courses in which students propose and evaluate architectural ideas, design VLSI components, integrate these components into a single chip, and finally test the chips that they have designed. We are nearing the end of the third phase. After presenting the architecture and our design aids, we describe how the goal of the project evolved to building a computer the size of a Motorola 68000 microprocessor with the performance of a VAX-11 minicomputer.
Box 3

V58.16 The Polycyclic Architecture, by Dr. B.R. Rau. 18 Nov. 1981

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

A horizontal architecture uses a wide instruction word to simultaneously control multiple resources. Such architectures offer the potential for high performance scientific computing at a modest cost. If this potential performance is to be realized, the multiple resources of a horizontal processor must be scheduled effectively. The scheduling task for conventional horizontal processors is quite complex, and the construction of highly optimized compilers for them is a difficult and expensive project. The polycyclic architecture is a horizontal architecture that supports the scheduling task by using delay elements (storage embedded within the interconnection scheme).
Box 3

V58.17 Translation Buffer Performance in the VAX-11/780, by Dr. Douglas Clark. 10 Nov. 1982

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

The VAX Translation Buffer (TB) is a cache of recently used virtual-to-physical address translations. In this talk Dr. Clark presents the results of some measurements of the TB in a VAX-11/780. The measurements were made with a hardware monitor and the VAX was measured during ordinary timesharing use as well as under a reproducible synthetic workload. Results include the miss ratio of the TB for both Process space and System space, the TB-flushing context switch frequency, and the amount of time spent servicing TB misses. Additional measurements were made with half the TB disabled.
Box 3

V58.18 PSL: A Portable LISP System, by Prof. Martin L. Griss. 26 Jan. 1983

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

PSL is a new portable implementation of LISP, based on the earlier Standard LISP systems from Utah. The system is distinguished by having essentially all of the interpreter and compiler written in PSL itself, compiled by an optimizing compiler. PSL runs on the DEC System 20, the VAX under UNIX, and a variety of Motorola 68000 based systems. Implementations for the CRAY-1 and IBM-370 are underway. The talk describes the design philosophy and current status of PSL.
Box 4

V58.19 VisiOn (TM), A Multi-Window Operating Environment for Personal Computers, by William T. Coleman. 2 March 1983

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

VisiOn is a multi-window operating environment which uses a mouse pointing device. It has been developed so as to be highly portable across personal computers. The system is demonstrated on the IBM Personal Computer, followed by an overview of the design objectives and the design of the system.
Box 4

V58.20 Life at Ground Zero, by Dr. Peter Stoll. 30 March 1983

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

The design of integrated circuits is not yet a stable art, let alone a science. The explosion in scale of the designs changes the very nature of the design problem. Even at the electrical performance level, formerly negligible problems such as lead inductance assume importance. Advantages in many areas argue for the comprehensive use of CMOS for new logic circuit designs.
Box 4

V58.21 Hierarchical Digital Design Methods, by Dr. Glen G. Langdon. 13 April 1983

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

A hierarchical digital design method which has been used in practice and in teaching a course in computer design will be presented. A relatively simple computer CPU presents an interesting level of complexity which filters out design techniques which "run out of steam" in complex digital systems. The course requires a "paper design" project of a CPU with an I/O bus, interrupt system, indirect addressing, and a student-architected subroutine call. The project motivates the student to reduce the complexity of her/his task by learning a systematic hierarchical approach. Hierarchical levels of a computer system are described. Digital design deals with three conceptual levels: (1) the instruction set architecture, (2) the functional units of the data flow, and (3) the basic gates and flip-flops. Hierarchical design bridges these conceptual gaps using a combination of strategies: top-down (divide-and-conquer) and bottom-up (construct more powerful functional units). Design is an iterative process; at each level the most important problem is attacked first and the detail is postponed. As the important decisions are made, and the detail is filled in at a lower level, it is usually necessary to modify the design. At the data flow/control unit level, the data part and the control unit part are treated separately. The data flow involves bussing, registers, and functional units such as MUXs (assemblers), ALUs and shifters. Concurrency classes within the data flow add to the performance. The control unit determines the behavior of the system by sequencing the data flow operations. An important third component of design at this level is the system timing. A two-dimensional (second-level) diagram is advantageous for describing the data flow. A sequence of micro-operations (register-transfer statements) is convenient for describing the control flow. System timing is best described with timing charts.
Box 4

V58.22 Digital Printing and Typesetting, by Prof. Brian K. Reid. 1 June 1983

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

Display technology has advanced significantly in recent years; it is now easy to build a workstation with a 1000 by 1000 color display. The technologies for printing those images on paper are significantly less advanced. I begin with a quick outline of the various technologies for making printed images and the way that they are currently used. These devices can be uneasily classified into three categories: plotters, typesetters, and laser printers. I explain how each such device works and what its limitations are. The future of printing technology clearly belongs to raster technologies. Raster plotters such as the Versatec, raster typesetters such as the Alphatype, and raster laser printers such as the Canon have been available for some years. I concentrate on laser printing devices, explaining their construction, operation, and control. I conclude with a brief description of the Stanford "Folio" laser printer project, which is the design of an ultrahigh-speed laser printer controller for use with the SUN workstation.
Box 4

V58.23 Machines Who Learn, by Prof. Douglas B. Lenat. 15 Nov. 1983

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

The major bottleneck in building large AI programs is knowledge acquisition, and one way of widening that bottleneck is by having the program learn new knowledge automatically. There has been an explosion of interest and activity in machine learning in the past few years. This talk focuses on my own efforts over the past decade to build machines who learn by discovery, guided by a large body of informal heuristic rules. Some of these rules suggest plausible concepts to consider at each moment; some evaluate candidates for interestingness. The domains which our latest program, EURISKO, explores are quite diverse: set theory, number theory, plumbing, naval ship design, and the design of three-dimensional VLSI circuits. Many of our results are exciting and some are quite humorous.
Box 4

V58.24 Machines Who See and Do, by Prof. Rodney A. Brooks. 22 Nov. 1983

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

Biological evolution has produced many animals capable of various levels of vision, mobility and manipulative skill. How can we give robots the same abilities? This talk overviews three examples of robot programs in model-based vision, 3-d path planning, and assembly plan checking, and shows a common methodology linking them. In each case a continuous class of possibilities is assumed while a physical process or event is modeled and then "run backwards" to solve the perceptual or planning problem.
Box 5

V58.25 The MIPS Project, by Christopher Rowen. 16 Nov. 1983

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

VLSI technology provides new constraints and challenges for computer architects. Silicon can provide low cost and high performance only if the hardware resources of the machine are carefully chosen and heavily used. The MIPS project at Stanford explores the interaction of VLSI with new architectural ideas. Like the Berkeley RISC and IBM 801 projects, MIPS uses a streamlined instruction set. Its heavily pipelined microarchitecture ensures high utilization of silicon resources. It relies on advanced compiler technology to shift much complexity from hardware into software. Our nMOS VLSI implementation capitalizes on the natural regularity of the architecture to create a working processor several times the speed of commercial microprocessors. This talk describes the architecture and implementation of MIPS.
Box 5

V58.26 Software Army on the March - Project Strategies and Tactics, by Dr. John R. Mashey. 28 Sept. 1983

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

This talk describes the work of an army building roads as an analogy to software projects. Decision-making processes are examined from two different viewpoints. The first is the formal game theory viewpoint - making decisions in a non-deterministic, multi-stage, non-zero-sum game played with incomplete information. The second is the "army" model. From this viewpoint are discussed such issues as fighting the right war in the first place and choosing good routes to reach the goals; the need for scouts on motorcycles ("fast prototypers"); how campaigns differ, and thus affect the choice of troops; special precautions for earthquake country; getting natives to buy and drive your trucks, instead of shooting your tires out as you drive through their villages. There exist many similarities in the decision processes of formal game analysis, military planning, and project management. The talk uses the first and second to help shed light on the third.
Box 5

V58.27 Modula-2, by Prof. Niklaus Wirth. 4 Oct. 1983

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

Modula-2 is a high-level programming language, coming after Pascal and Algol, which has been in use since 1979. The whole program is a module, which justifies the name of the language. One of the most useful aspects of Modula-2 is the concept of local objects which can be hidden but made to reappear. Modula-2 has greater capabilities than Pascal in its more regular syntax, procedure variables, and low-level programming abilities. Professor Wirth also discusses the powerful and versatile compiler he built in conjunction with the development of Modula-2.
Box 5

V58.28 Etherphone-Ethernet Voice Service, by Dr. Lawrence C. Stewart. 7 Dec. 1983

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

This talk describes the architecture and initial implementation of an experimental telephone system developed by the Computer Science Laboratory at the Xerox Palo Alto Research Center. A specially designed processor (Etherphone) connects to a telephone instrument and transmits digitized voice, signaling, and supervisory information in discrete packets over the Ethernet local area network. When used by itself, an Etherphone provides the functions of a conventional telephone, but it comes into its own when combined with the capabilities of a nearby office workstation, a voice file service, and other shared services such as databases. Most of the work so far has gone into the basic provisions for voice switching and transmission. Today the system supports ordinary telephone calls and simple voice message services. These functions will be expanded as more is learned about the integration of voice with experimental office systems.
Box 5

V58.29 The Bagel: A Systolic Concurrent Prolog Machine, by Prof. Ehud Shapiro. 4 Jan. 1984

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

It is argued that explicit mapping of processes to processors is essential to effectively program a general-purpose parallel computer, and, as a consequence, that the kernel language of such a computer should include a process-to-processor mapping notation. The Bagel is a parallel architecture that combines concepts of dataflow, graph reduction, and systolic arrays with Turtle programs as a mapping notation. Concurrent Prolog, combined with Turtle programs, can easily implement systolic systems on the Bagel. Several systolic process structures are explored via programming examples, including linear pipes (sieve of Eratosthenes, merge sort, natural-language interface to a database), rectangular arrays (rectangular matrix multiplication, band-matrix multiplication, dynamic programming, array relaxation), static and dynamic H-trees (divide-and-conquer, distributed database), and chaotic structures (a herd of Turtles). All programs have been debugged using the Turtle-graphics Bagel simulator, which is implemented in Prolog.
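The "linear pipe" process structure mentioned in the abstract, with the sieve of Eratosthenes as its classic example, can be sketched sequentially in Python generators rather than Concurrent Prolog. This is a hypothetical illustration, not code from the lecture: each prime discovered at the head of the pipe becomes a new filter stage appended to it.

```python
# Illustrative sketch of a sieve-of-Eratosthenes pipeline: a chain of
# generator "processes", each filtering out multiples of one prime
# before passing the remaining candidates downstream.

def integers_from(n):
    while True:
        yield n
        n += 1

def filter_multiples(stream, p):
    for x in stream:
        if x % p != 0:
            yield x

def primes(count):
    stream = integers_from(2)
    out = []
    while len(out) < count:
        p = next(stream)                      # the pipe's head is always prime
        out.append(p)
        stream = filter_multiples(stream, p)  # extend the pipe by one stage
    return out

assert primes(8) == [2, 3, 5, 7, 11, 13, 17, 19]
```

On the Bagel the stages would run concurrently on distinct processors, which is exactly what the Turtle-program mapping notation is for; the generator chain only captures the dataflow shape.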
Box 5

V58.30 A Perspective on Automatic Programming, by Dr. David R. Barstow. 7 Feb. 1984

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

Most work in automatic programming has focused primarily on the roles of deduction and programming knowledge. However, the role played by knowledge of the task domain seems to be at least as important, both for the usability of an automatic programming system and for the feasibility of building one that works on non-trivial problems. This perspective has evolved during the course of a variety of studies over the last several years, including detailed examination of existing software for a particular domain (quantitative interpretation of oil well logs) and the implementation of an experimental automatic programming system for that domain. The importance of domain knowledge has two important implications: a primary goal of automatic programming research should be to characterize the programming process for specific domains; and a crucial issue to be addressed in these characterizations is the interaction of domain and programming knowledge during program synthesis.
Box 6

V58.31 Reduced Instruction and Fast Operand Access for General-Purpose Microprocessors, by Prof. Manolis Katevenis. 15 Feb. 1984.

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

Even in VLSI, the transistors available on the limited chip area constitute a scarce resource when used for the implementation of a complete processor or computer, and thus they have to be used effectively. General-purpose computations consist mostly of simple operations performed on frequently accessed operands, many of which are the few local scalar variables of procedures. The recent trend in general-purpose von Neumann computer architecture towards instruction sets of increasing complexity leads to inefficient use of the scarce on-chip resources. The alternative, the Reduced Instruction Set Computer (RISC), makes more effective use of on-chip transistors by using them in other functional units that do support frequent operations. The RISC II processor design and implementation has investigated the architectural implications of these observations. RISC II features simple instructions (like MIPS), and a large multi-window register file whose overlapping windows are used for holding the arguments and local scalar variables of the most recently activated procedures. The RISC II single-chip processor looks different from other popular commercial processors: it has fewer total transistors; it spends only 10% of the chip area on control, rather than 1/2 to 2/3; it has 138 32-bit registers; and it required about five times less design and layout effort to get chips that work correctly and at speed on first silicon. RISC II can execute integer, high-level-language programs significantly faster than other processors made in similar technologies.
Box 6

V58.32 Algorithm Animation, by Prof. Robert Sedgewick. 29 Feb. 1984

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

The central thesis of this talk is that it is possible to expose fundamental characteristics of computer programs through the use of dynamic (real time) graphic displays, and that such algorithm animation has the potential to be useful in several contexts. Recent research in support of this thesis is described, including the development of a conceptual framework for the process of animation, the implementation of a software environment on high-performance graphics-based workstations supporting this activity, and the use of the system as a principal medium of communication in teaching and research. Animated numerical, sorting, searching, string processing, geometric, and graph algorithms are described and illustrated.
Box 6

V58.33 Recent Progress on the Factoring Problem, by Prof. H.C. Williams. 6 March 1984

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

Given an integer N which is not prime, the problem of finding a, b > 1 such that N = ab is called the Factoring Problem. This problem has been studied for many centuries, but the most significant progress on it has occurred only recently. New research has been stimulated by the invention of an elegant two-key cryptosystem (called RSA) which depends for its security on the presumed difficulty of the factoring problem. In this talk several methods of factoring are briefly discussed. These include some exciting new developments such as the recent implementation of the Quadratic Sieve, the development of special-purpose hardware, and the proposed use of the Massively Parallel Processor. Some indication of the future progress on this problem is also given.
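The factoring problem as stated (find a, b > 1 with N = ab) can be illustrated, far more simply than the sieve methods the talk surveys, by Fermat's difference-of-squares method, the ancestor of the Quadratic Sieve idea. This sketch is a hypothetical illustration, not material from the lecture.

```python
# Illustrative sketch: Fermat's difference-of-squares factoring.
# Search for x with x*x - N a perfect square y*y; then
# N = x*x - y*y = (x - y)(x + y). The Quadratic Sieve elaborates the
# same congruence-of-squares idea to factor much larger numbers.

import math

def fermat_factor(n):
    """Return (a, b) with a*b == n and a <= b, for odd composite n."""
    x = math.isqrt(n)
    if x * x < n:
        x += 1                      # start at ceil(sqrt(n))
    while True:
        y2 = x * x - n
        y = math.isqrt(y2)
        if y * y == y2:             # found a difference of squares
            return x - y, x + y
        x += 1

assert fermat_factor(5959) == (59, 101)
```

Fermat's method is fast only when the two factors are close together, which is one reason the talk's modern methods are needed for cryptographically sized N.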
Box 6

V58.34 Designing Programs for Foreign Environments, by Steven H. Rempell. 16 May 1984

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

In order to survive in a highly competitive environment, software designers are finding it more and more imperative to take into account international requirements. This talk details many of the stumbling blocks to designing software that is translatable as well as flexible to local customs. Especially challenging are the foreign character sets. There are eleven different alphabets in Western Europe alone, each with its own keyboard arrangement, and 6000 characters of Japanese kanji. Conventions of name order, monetary systems, and meanings of punctuation marks and symbols are a few of the other differences found between countries.
Box 6

V58.35 The Design of a Capability-Based Distributed Operating System, by Prof. Andrew S. Tanenbaum. 22 May 1984

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

The Computer Systems Group at the Vrije Universiteit is currently engaged in constructing a highly modular distributed operating system designed to run on a collection of 68000s, 8086s, and similar machines connected by a 10 Mbps token-passing ring. The operating system, Amoeba, has a very small kernel that supports multiprogramming and message passing. The rest of the operating system, including the file system, (most of the) memory management, process management, resource management, and so on, runs outside the kernel as user processes. The system is based on the client-server model, using a form of cryptographically protected capabilities to implement an object-oriented system. The talk discusses the overall design of the system and its distributed environment, how capabilities are managed in a protected way by untrusted user processes, the structure of a multiserver file system supporting multiple versions of files using optimistic concurrency control, and various other conceptual and implementation issues.
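The phrase "cryptographically protected capabilities" can be unpacked with a small sketch. This is an assumed scheme for illustration only, not Amoeba's actual capability format: a keyed hash lets untrusted user processes hold and pass capabilities while being unable to forge new ones or amplify their rights.

```python
# Illustrative sketch (hypothetical scheme): a capability is an
# (object id, rights, tag) triple whose tag is a keyed hash computed
# by the object's server. Clients can store and pass capabilities,
# but altering the rights field invalidates the tag.

import hmac
import hashlib

SERVER_KEY = b"server-secret"      # known only to the object's server

def make_capability(object_id, rights):
    msg = f"{object_id}:{rights}".encode()
    tag = hmac.new(SERVER_KEY, msg, hashlib.sha256).hexdigest()
    return (object_id, rights, tag)

def check_capability(cap):
    object_id, rights, tag = cap
    msg = f"{object_id}:{rights}".encode()
    good = hmac.new(SERVER_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, good)

cap = make_capability(42, "read")
assert check_capability(cap)
forged = (42, "read+write", cap[2])   # amplifying rights breaks the tag
assert not check_capability(forged)
```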
Box 6

V58.36 Making Smalltalk SOAR: Smalltalk on a RISC, by Prof. David A. Patterson. 23 May 1984

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

The SOAR project is a multi-month project to build a simple, Von Neumann computer that is designed to execute the Smalltalk-80 system faster than traditional computers. The Smalltalk-80 system is a highly productive programming environment, but poses tough challenges for implementors: dynamic data typing, a high level instruction set, frequent and expensive procedure calls, and object-oriented storage management. SOAR compiles programs to a low-level, efficient instruction set. Parallel tag checks permit high performance for the simple common cases and cause traps to software routines for the complex cases. Parallel register initialization and multiple on-chip register windows speed procedure calls. Sophisticated software techniques relieve the hardware of the burden of managing objects. Initial evaluations of the effectiveness of the SOAR architecture were arrived at by compiling and simulating benchmarks, and a 35,200-transistor NMOS chip is being fabricated. The early results suggest that a Reduced Instruction Set Computer can provide high performance in an exploratory programming environment.
Box 7

V58.37 The Search for Randomness, by Prof. Persi Diaconis. 29 May 1984

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

Computers simulate randomness by using simple recursive procedures such as congruential random number generators. Popular generators are surveyed along with recent work by George Marsaglia which proposes some new, simple tests which most good generators fail. These tests are sufficiently close to real world uses to give cause for alarm.
Box 7

V58.38 The Human Factor: Designing Computer Systems for People, by Dr. Dick Rubinstein. 5 June 1984

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

What principles of human factors--and what just plain good ideas--may be applied to the design of quality user interfaces? Dr. Rubinstein advances a number of principles and ideas that can make computer systems better for all users, including examples from the VAXstation user interface.
Box 7

V58.39 Designing High Performance MOS Circuits, by Mark Horowitz. 3 Oct. 1984

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

A successful chip must not only function; it must also achieve a certain performance goal. This talk will describe some techniques used in industry to produce high performance MOS circuits, and will focus on CMOS design. The first part of the talk will review the basic issues in circuit design and point out the tradeoffs a circuit designer must make. The second part will focus on some general approaches to achieving high performance, including precharge logic, domino logic, sense amplifiers, buffer sizing and others. Parts of the MIPS-X processor will be used as example circuits.
Box 7

V58.40 Performance of Remote File Access and Processor Load Sharing In Locally Distributed Systems, by Prof. Edward D. Lazowska. 9 Oct. 1984

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

This talk describes two recent studies of design alternatives for locally distributed systems. The first study, conducted with John Zahorjan of the Univ. of Washington and David Cheriton and Willy Zwaenepoel of Stanford, considers the performance of single-user workstations that access files remotely from a shared file server. Our approach is to use the results of measurement experiments to parameterize queueing network performance models, which then are used to assess performance under load and to evaluate design alternatives. The second study, conducted jointly with Derek Eager of the Univ. of Toronto and John Zahorjan, addresses two fundamental questions about load sharing in homogeneous distributed systems: whether the potential performance improvements can be realized in light of the overhead required, and whether system-wide state information is needed in making decisions about where to relocate work moved from an overloaded node. Our approach is to use analytic models of artificial load sharing policies that optimistically and pessimistically bound the performance of realistic schemes. Our major conclusions are: load sharing results in improvements over a wide range of system loads, and so is a viable technique; very little is gained by maintaining extensive state information.
Box 7

V58.41 Computation and Communication in R*: A Distributed Database Manager, by Bruce G. Lindsay. 30 Oct. 1984

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

This talk presents and discusses the computation and communication model used by R*, a prototype distributed database management system. An R* computation consists of a tree of processes connected by virtual circuit communication paths. The process management and communication protocols used by R* enable the system to provide reliable, distributed transactions while maintaining adequate levels of performance. Of particular interest is the use of processes in R* to retain user context from one transaction to another, in order to improve the system performance and recovery characteristics.
Box 7

V58.42 A Mixed Voice/Data Network, by John Wakerly. 31 Oct. 1984

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

The DAVID Information Manager carries voice, RS-232 data, and high-speed packets in a local area network using traditional telephone twisted-pair wiring in an office environment. This talk outlines the technical requirements for voice and data, and then describes the major hardware elements of the DAVID architecture. The talk includes brief descriptions of key topics in digital telephony, including pulse-code modulation, time-division multiplexing, circuit switching, and ping-pong transmission.
Box 8

V58.43 The Architecture of the Convex 1 - a 64-Bit Affordable Supercomputer, by Steve Wallach. 7 Nov. 1984

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

This discussion will present the instruction set, internal register architecture, memory structure, and I/O structure of the Convex 1. There will also be a discussion on the optimizing and vectorizing FORTRAN compiler as well as the Convex version of 4.2 UNIX.
Box 8

V58.44 Implementing a Cache Consistency Protocol, by Prof. Randy Katz. 20 Nov. 1984.

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

We present an ownership-based multi-processor cache consistency protocol, designed for implementation by a single-chip VLSI cache controller. The protocol and its VLSI realization are described in some detail to emphasize the important implementation issues, in particular the controller critical sections and the inter- and intra-cache interlocks needed to maintain cache consistency. The design has been carried through to layout in a p-well CMOS technology.
Box 8

V58.45 A Fast Compiler for Modula-2, by Prof. Niklaus Wirth. Edited copy. 15 Jan. 1985

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

One-pass compilation has the advantage of being considerably faster than many-pass compilation, because it avoids the generation, storage, retrieval, and analysis of interpass code. On the other hand, it requires a relatively large main store. Since nowadays even "small" computers tend to have a considerable amount of memory, one-pass compilers regain significance. We compare a new Modula-2 compiler with existing ones and describe its main structure. In particular, some problems connected with separate compilation, and their solutions, are discussed.
Box 8

V58.46 The Potential State of Computer and Information Technology in the Year 2000, by Prof. Stephen Lundstrom. Edited copy. 23 Jan. 1985

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

At the request of NASA, the Aeronautics and Space Engineering Board (ASEB) of the National Academy of Sciences convened a workshop to assess the aeronautics technology possibilities by the year 2000. This workshop included panels to consider future aircraft system concepts, as well as the supporting disciplines of structures, aerodynamics, propulsion, human factors, materials, and guidance, navigation, and control. Since all of these disciplines are, or will be, highly dependent on advanced computer and information technology, a separate panel was convened to consider this area. The information presented in this talk is the result of the joint deliberations of the group of industry, university, and government experts from a wide range of areas who worked on the panel. The objective of these deliberations was to project what computer and information technology would be possible by the year 2000 if sufficient resources are made available. While the collective panel assembled had a very broad background, the conclusions reached were not unanimous, especially regarding the extent to which various disciplines would be capable of progressing over this time frame. Undoubtedly, new research areas, unknown to the panelists, will emerge during this time frame. The projections given in the report are the honest opinions of most of the group assembled and while they cannot be supported with any quantitative analyses, they do represent an outlook to the future which is as realistic as possible. The topics considered by the panel, and reported in this talk, include: Microelectronics and Components, Processors (including scientific supercomputers and airborne distributed systems), Software, Artificial Intelligence and Expert Systems, Man-Machine Interfaces, Systems Development and Implementation, and High Integrity Systems.
Box 8

V58.47 Knowledge Based Software Development in FSD, by Prof. Robert Balzer. Edited copy. 25 Feb. 1985

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

Our group is pursuing the goal of an automation based software development paradigm. While this goal is still distant, we have embedded our current perceptions and capabilities in a prototype (FSD) of such a software development environment. Although this prototype was built primarily as a testbed for our ideas, we decided to gain insight by using it, and have added some administrative services to expand it from a programming system to a computing environment currently being used by a few ISI researchers for all their computing activities. This "AI Operating System" provides specification capabilities for Search, Coordination, Automation, Evolution, and Inter-User Interaction. Particularly important is evolution, as we recognize that useful systems can only arise, and remain viable, through continued evolution. Much of our research is focused on this issue and several examples will be used to characterize where we are today and where we are headed. Naturally, we have started to use these facilities to evolve our system.
Box 8

V58.48 The USC Expert Synthesis System, by Prof. Alice Parker. Edited copy. 6 March 1985

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

This talk presents a brief overview of the USC Expert Synthesis System, which is part of the ADAM (Advanced Design AutoMation) project. This system supports interactive specification of digital system design problems, and provides a framework for both interactive and automatic design synthesis of digital integrated circuits from specifications of the black box behavior. Topics covered include the overall system structure, the representation of design information, and two specific synthesis problems -- the synthesis/verification of data path designs and the synthesis of clocking schemes. In addition, abstract system specification is covered briefly.
Box 9

V58.49 The Theory and Application of Non-First-Normal-Form Relations, by Prof. Henry Korth. Edited copy. 12 March 1985

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

In this talk, we present a formal model for the study of non-1NF relational databases. A formal relational calculus is defined and this calculus is proved equivalent to an extension of the standard relational algebra. We define a new normal form for non-1NF relations and define several new algebraic operators that preserve this normal form. In the ROSI (Relational Operating System Interface) project at the University of Texas, we are using our model of non-1NF relations to construct a relational user interface to an operating system environment. The ROSI query languages currently being designed include an extended version of SQL for non-1NF relations, and a graphical query language based on an extension to universal-relation languages. In this talk, we present a preliminary design of these languages.
Box 9

V58.50 The Status, Trends and Challenges in Information Storage Technology, by Prof. A.S. Hoagland. Edited Copy. 13 March 1985

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

Magnetic recording has played the predominant role in meeting the growing needs for data storage for over 30 years, and during this period storage density consistently has doubled about every two years. Yet we are just now witnessing an explosive growth in the application of this technology for the storage of information, and technological advances are beginning to accelerate. At the same time, optical storage is just beginning to be introduced in commercial storage devices at much higher densities than currently available. However, density itself may be of much less significance in the long term than functional features related to usage. The evolution of information storage technologies and devices will be traced and their future prospects will be discussed. The potential competitive and complementary roles of magnetic and optical storage will be related to emerging applications. This talk is a quick overview of a subject which is to be covered in a much more extensive and comprehensive form in a series of videotapes being prepared by the Institute for Information Storage Technology at the University of Santa Clara that are to be made generally available to industry.
Box 9

V58.51 On the Sequential Nature of Unification, by Prof. Cynthia Dwork. 9 April 1985

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

Unification of terms is a crucial step in resolution theorem proving, with applications to a variety of symbolic computation problems. It will be shown that the general problem is log-space complete for P, even if infinite substitutions are allowed. Thus a fast parallel algorithm for unification is unlikely. More positively, term matching, an important subcase of unification, will be shown to have a parallel algorithm requiring a number of processors polynomial in n, the size of the input, and running in time polylogarithmic in n. This talk assumes no familiarity with unification or its applications.
Box 9

V58.52 Graphics for User Interfaces, by Prof. Vaughan Pratt. 16 April 1985

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

Responsive user interfaces need high performance high quality graphics support. I shall describe recent research at Sun addressing the graphics needs of text, splines, and windows.
Box 9

V58.53 The GEM Programming Environment/User Interface, by Timothy Oren. Edited master; U-matic dub. 24 April 1985

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

GEM (Graphics Environment Manager), a program created at Digital Research, Incorporated, brings the familiar menu/icon/windowing environment of the Xerox Star to the IBM PC, the Atari ST series, and other personal computers. Unlike other environments of this type, GEM provides a large degree of source code compatibility between different machines and machine configurations, allowing applications to be ported with relatively little effort.
Box 9

V58.54 Statistical Mechanics of Concurrency, by Prof. Yechiam Yemini. 7 May 1985

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

Consider a set of agents involved in a concurrent access to a shared distributed resource. Sharing is subject to interference constraints that define the set of admissible configurations of concurrent activities. Examples of such interference systems include transactions sharing a database, packet transmissions sharing the ether in a packet radio network, and interprocessor communications sharing a switching network. The performance behavior of such systems has resisted traditional analysis techniques. I will present a statistical mechanics approach to the analysis of interference systems. This approach facilitates solutions of complex distributed systems, offers physical insight into their behavior, yields potentially useful design tools based on physical performance measures (e.g., the "pressure" of a database), and provides access to a wide body of methods from physics for solving such systems.
Box 10

V58.55 PostScript, by Prof. Brian K. Reid. Edited Master. 22 May 1985

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

PostScript is an executable language for the description of graphical images. It was developed by a local company, Adobe Systems, and is implemented in several commercially-available laser printers and phototypesetters.
Box 10

V58.56 Transparency in Distributed Operating Systems, by Prof. Gerald J. Popek. Edited master. 5 June 1985

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

Transparency in a distributed operating system means that the boundaries among the machines are invisible, to both people and programs. Access to all resources, be they data, tasks, devices, services, etc., is the same, independent of the location of the requestor or the resource. This concept is akin to the manner by which virtual memory in operating systems hides the distinction between main and secondary storage, or the way data independence in database systems separates the means of information access from access paths, storage structures, etc. Denning argues that transparency is an important, fundamental design principle. Consider a distributed computing environment composed of thousands of computing nodes (workstations, mainframes, etc.), with perhaps different processor types, and even different operating systems in some cases. Is it feasible to provide a high level of transparency? Is it desirable? These issues will be raised, resulting architectural and design problems outlined, and the current state of their solutions in the Locus distributed operating system family discussed. Some of the problems include name service in a very large scale system, CPU and data type heterogeneity, and integration of foreign operating systems.
Box 10

V58.57 Belief, Awareness, and Limited Reasoning, by Dr. Joe Halpern. Edited master. 1 Oct. 1985

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

Classical possible-worlds models for knowledge and belief suffer from the problem of logical omniscience: agents know all tautologies and their knowledge is closed under logical consequence. This unfortunately is not a very accurate account of how people operate! We review possible-worlds semantics, and then go on to introduce three approaches towards solving the problem of logical omniscience. In particular, in our logics, the set of beliefs of an agent does not necessarily contain all valid formulas. One of our logics deals explicitly with awareness, where, roughly speaking, it is necessary to be aware of a concept before one can have beliefs about it. Another gives a model of local reasoning, where an agent is viewed as a society of minds, each with its own cluster of beliefs, which may contradict each other. The talk will be completely self-contained.
Box 10

V58.58 Challenges and Directions in Fault-Tolerant Computing, by Dr. Jack Goldberg. Edited master. 2 Oct. 1985

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

Two decades of theoretical and experimental work and numerous recent successful applications have established fault tolerance as a standard objective in computer system design. As with the objective of correctness, and in contrast to the objective of high speed, satisfaction of fault tolerance requirements cannot be demonstrated by testing alone, but requires formal analysis. Most of the work in fault tolerance has been concerned with developing effective design techniques. Recent work on reliability modeling and formal proof of fault-tolerant design and implementation is laying a foundation for a more rigorous design discipline. The scope of concern has also expanded to include any source of computer unreliability, such as design mistakes, in software, hardware, or at any system level. Current art is barely able to keep up with the rapid pace of computer technology, the stresses of new applications and the new expansion in scope of concerns. Particular challenges lie in coping with imperfections of the ultra-small, i.e., high-density VLSI, and the ultra-large, i.e., large software systems. It is clear that fault tolerance cannot be "added" to a design and must be integrated with other design objectives. Simultaneous demands in future systems for high performance, high security, high evolvability and high fault tolerance will require new theoretical models of computer systems and a much closer integration of practical design techniques. The talk will discuss the widening scope of research into computer dependability. New issues include tolerance of design errors (including software), operator errors, and the safety of computer-controlled systems.
Box 10

V58.59 CommonLoops--A Graceful Merger of Lisp and Object Oriented Programming, by Dr. Daniel G. Bobrow. Edited master. 6 Nov. 1985

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

CommonLoops merges the facilities of object oriented programming and Lisp. This talk will briefly describe the relevant features of the two styles of programming, and describe the unique properties of this merge. These include a uniform syntax for function calling and sending messages; a merger of the type space of Lisp and the class hierarchy of objects; a generalization of method specification that includes ordinary Lisp functions at one extreme, and fully type specified functions at the other; and a "metaclass" mechanism that allows tradeoffs between early binding and ease of exploratory programming in the implementation of objects.
Box 10

V58.60 Perceptual Organization and the Representation of Natural Form, by Dr. Alex P. Pentland. Edited master. 19 Nov. 1985

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

To understand both perception and commonsense reasoning we need a representation that captures important physical regularities and that correctly describes people's perceptual organization of the stimulus. Unfortunately, the current representations were originally developed for other purposes (e.g., physics, engineering) and are therefore often unsuitable. We have developed a new representation and used it to make accurate descriptions of an extensive variety of natural forms including people, mountains, clouds and trees. The descriptions are amazingly compact. The approach of this representation is to describe scene structure in a manner similar to people's notion of "a part," using descriptions that reflect a possible formative history of the object, e.g., how the object might have been constructed from lumps of clay. For this representation to be useful it must be possible to recover such descriptions from image data; we show that the primitive elements of such descriptions may be recovered in an overconstrained and therefore reliable manner. An interactive "real time" 3-D graphics modeling system based on this representation will be shown, together with short animated sequences demonstrating the descriptive power of the representation.
Box 11

V58.61 A Knowledge-Based Approach to High-Level Program Optimization, by Prof. Allen Goldberg. 10 Dec. 1985

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

Our concern is the compilation of high-level declarative programs into efficient Pascal-level implementations. Specifications are written as equational assertions over a set-theoretic data domain. Compilation is achieved by semi-automatic application of source-to-source transformations. These transformations formalize knowledge about high-level optimization techniques required for the task. These techniques include finite differencing, loop fusion, algebraic simplification, symbolic evaluation, data structure selection, and store-vs-compute. The relevance of this work to the optimization of Prolog and functional programs will be discussed.
Box 11

V58.62 The GF11 Supercomputer, by Dr. John F. Beetem. Edited master. 21 Jan. 1986

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

GF11 is a parallel computer currently under construction at the IBM Yorktown Research Center. The machine incorporates 576 floating-point processors arranged in a modified SIMD architecture. Each processor has space for 3 Mbytes of memory and is capable of 20 Mflops, giving the total machine a peak of 1.125 Gbytes of memory and 11.52 Gflops. The processors are interconnected by a dynamically reconfigurable non-blocking switching network. At each machine cycle, any of 1024 pre-selected permutations of data can be realized among the processors. The main intended application of GF11 is a class of calculations arising from quantum chromodynamics, a proposed theory of the elementary particles which participate in nuclear interactions.
Box 11

V58.63 Technology Trends of CMOS VLSI, Dr. Yoshio Nishi. 22 Jan. 1986

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

As a mainstream technology for the VLSI/ULSI era, CMOS technology has attracted increasing attention and absorbs a tremendous amount of R&D resources. This seminar will cover the historical background of CMOS technology, recent progress in VLSI CMOS technology, and application of CMOS technology to the latest VLSI devices. Also discussed will be several topics in device physics and fabrication process technology for submicron CMOS.
Box 11

V58.64 Juno, A Constraint-Based Graphics System, by Dr. Greg Nelson. Edited master. 4 Feb. 1986

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

Juno is a system that harmoniously integrates a language for describing computer graphics images with a what-you-see-is-what-you-get image editor. Two of Juno's novelties are that geometric constraints are used to specify locations, and that the text of a Juno program is modified in response to the interactive editing of the displayed image that the program produces. A videotape of the program will be shown; the research was done at Xerox PARC.
Box 11

V58.65 Sprite OS, by Prof. John Ousterhout. Edited Master. 12 Feb. 1986

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

Sprite is a new network operating system being built at Berkeley as part of the SPUR workstation project. The talk will focus on three parts of Sprite: the filesystem, process offloading, and the virtual memory system. The filesystem will provide a single shared Unix-like file hierarchy distributed across several servers. Clients will use dynamically-constructed prefix tables to associate file names with servers. Sprite will include a process offloading mechanism that allows jobs to be run on idle workstations in the same way that jobs may be placed in background in Unix. The virtual memory system will be Unix-like with simple extensions to permit shared data segments and synchronization.
Box 11

V58.66 Lattices and Algorithms, by Prof. Laszlo Lovasz. Edited master. 18 Feb. 1986

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

Given a lattice L in the n-dimensional space, we may ask several questions like "Which is the shortest non-zero vector in L?", "Which point of L is closest to a given point in the space?", etc. Many classical results in the "Geometry of Numbers" deal with these questions from a non-algorithmic point of view. However, algorithmic aspects of them only recently became objects of extensive study, to a large extent because of their applications in algebra, cryptography, combinatorial optimization, prime number theory, etc. Moreover, renewed interest in this field has resulted in substantial improvements of classical results.
Box 12

V58.67 Man Dies due to Poor Man-Machine Interface Design, by Chuck Clanton. Edited Master. 19 Feb. 1986

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

Few dispute that interactions between computers and users require thoughtful attention. Is this a matter of craft or of idiosyncratic personal taste? Placebo effects and fundamental principles are difficult to distinguish. This talk recounts some of the inadequately controlled and poorly validated experiments by the speaker which lend insight into the current state of the art.
Box 12

V58.68 Cache Memories, by Alan Smith. Master. 2 April 1986

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

Cache memories are used in almost all large and medium size computers in order to reduce the average main memory access time. Cache memories are also in use with, or being built for (or on), high performance microprocessors. In this talk, we provide an overview of the issues in cache memory design, concentrating on some recent work by the speaker. Particular attention is given to cache workloads, cache consistency mechanisms, and the miss ratio as a function of line size and cache size. Other topics may include: cache fetch algorithms (demand vs. prefetch), placement (set associative, direct mapping, etc.) and replacement (LRU, FIFO, etc.) algorithms, store-through vs. copy-back updating of main memory, cold-start vs. warm-start miss ratios, the effect of input/output through the cache, virtual address caches, user/supervisor caches, multilevel caches, the behavior of split instruction/data caches, and translation lookaside buffers.
Box 12

V58.69 Complexity Theoretic Aspects of Linear Programming, by Steven Smale. Master. 29 April 1986

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

An exposition of some results on the speed of basic linear programming complementarity problems. Some relationships to Newton's method are also being developed.
Box 12

V58.70 The H-P Precision Architecture, by Jerry Huck. Master. 30 April 1986

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

HP's Spectrum Program has defined and is implementing a completely new family of computer systems using a RISC based architecture. This talk focuses on the processor organization to examine CPU resources, instruction set, and storage organization. Started by a small research group at HP's central labs, the concepts of RISC computer design were applied to a new architecture suitable for both commercial and technical applications. The architecture adheres to the basic tenet: "Make the most frequent functions go as fast as possible." Compiler writers, operating systems experts, and hardware designers iterated through a proposal-paper design cycle to optimize an architecture for maximum throughput. The first member of the family has been announced and the presentation discusses the organization and implementation of its CPU.
Box 12

V58.71 Synthesizing Realistic Textures by the Composition of Perceptually Motivated Functions and Programming Language Design, by Ken Perlin. Master. 6 May 1986

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

This research demonstrates a uniform functional composition framework for modeling and synthesizing complex textures. The appearance of a wide range of natural phenomena can be expressed and efficiently synthesized in this framework. Animation of texture is readily incorporated. Emphasis will be on explaining the properties leading to generality, expressivity, and efficiency. A system is described in which an image is approximated by a finite collection of samples, representing neighborhoods in the image. The user designs visual simulations of surface textures by constructing an algorithm that is to be independently computed at each image sample. Primitive functions are provided to allow control within the texture algorithm of such visually important texture properties as frequency and first order spatial statistics. The user proceeds by building from these functions. Feedback is provided by images indicating the state of any computed quantity over all samples. The system includes primitive functions that facilitate the manipulation of such visually discriminable qualities as brightness, contrast, coherent discontinuities, orientation, and features possessing restricted ranges of frequency. These primitive functions are used to build up composite functions that facilitate the manipulation of more sophisticated visual qualities. The system is applied to build the appearance of such textures as water, star fields, flame, marble, clouds, stucco, rock, smoke, and soap films. Major results are twofold. First I will show that a wide range of naturalistic visual textures can be constructed with this approach. Second, I will demonstrate specific functions that encode visual elements common to disparate visual textures.
Box 12

V58.72 The NCUBE Family of Parallel Supercomputers, by John Palmer. 14 May 1986.

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

NCUBE has developed a family of parallel computer systems based on a proprietary VLSI process that was designed specifically for parallel computing. The NCUBE systems range from 8 MIPS (2 mflops) for a 4-node board to 2000 MIPS (500 mflops) in a 1,024-processor system. There are three versions of the NCUBE system: NCUBE/4, a 4-node board that plugs into an IBM PC-AT; NCUBE/7, an office-environment system with up to 128 nodes; and NCUBE/10, a true supercomputer with up to 1,024 nodes. In addition to high computational performance, the NCUBE family has a unique high performance I/O system. The I/O system is expandable from 1 to 8 channels, each of which is capable of a 180 mbytes/sec transfer rate. This allows for very high performance graphics support and parallel disk interfaces. The NCUBE family, along with its software support, is covered in the presentation. Also, a COMPAQ with an NCUBE/4 is used in demonstration.
Box 13

V58.73 Nanocomputers and Molecular Engineering, by K. Eric Drexler. 21 Oct. 1986

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

The broad outlines of future technology will be set by the limits of physical law, if we can develop means for approaching those limits. Today, because we cannot directly manipulate the atomic building blocks of matter, we can make no more than a tiny fraction of the physical structures permitted by natural law. But advances in biotechnology and computational chemistry are opening a path to the development of molecular assemblers able to build objects to complex atomic specifications, removing this constraint and making possible dramatic advances in many fields. Among these advances will be nanocomputers with parts of molecular size. Mechanical nanocomputers--molecular Babbage machines--are amenable to design and analysis with available techniques: this technology promises sub-micron computers with gigahertz clock rates, nanowatt power dissipation, and RAM storage densities in the hundreds of millions of terabits per cubic centimeter.
Box 13

V58.74 Architectural Innovations in the AT&T CRISP Microprocessor, by David Ditzel. 4 March 1987

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

The AT&T CRISP Microprocessor is a high performance C Machine. CRISP is a general purpose single chip 32-bit processor containing 172,163 transistors. The instruction-set architecture is that of a registerless 2 1/2 address memory-to-memory machine with a small number of instructions and addressing modes. The CRISP instruction-set is relatively independent of any particular implementation. High performance is achieved by pipelining, caches, an efficient instruction-set, and several new architectural techniques. A 128-byte on-chip stack cache provides automatic register allocation in hardware. Branch folding allows branch instructions to execute in zero time. This talk describes some of the more innovative aspects of CRISP's hardware architecture.
Box 13

V58.75 Updating Deductive Databases in Logic, by David Scott Warren. 10 March 1987

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

Prolog adds a notion of procedurality to Horn clause logic by using a particular proof procedure, SLD resolution. Global state changes are effected in Prolog as side-effects of this proof procedure, and as a result can only be understood in a very syntactic way. We have developed a theory, based on modal logic, that adds procedurality to Horn clause logic in a more semantic way. This theory is applied to the problem of updating deductive databases, and results in a declarative semantics for database transactions. We show connections to a logical notion of null values and database triggers. We show how some transactions that update database views can be understood as statements in a particularly simple and elegant dynamic logic. We describe the current state of an implementation of a prototype concurrent, distributed, deductive database with an update language based on this logic.
Box 13

V58.76 Software Standards and Hardware Advances: The Driving Forces for Open Systems, by Andy Bechtolsheim. 11 March 1987

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

This talk discusses the advances in workstation technology which have occurred over the last five years and which have enabled us to build open systems interconnecting a wide range of computer equipment. We will cover the role of operating systems, networking, window systems, and graphics technology in the evolution of standards. Speculations will be made on possible developments in the next five years.
Box 13

V58.77 Servant: A New Shell for the MAC, by Andy Hertzfeld. 1 April 1987

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

Servant is a new MAC shell replacing the FINDER, the most common shell for the MAC. The goal in designing Servant is to help evolve the MAC user interface into the late eighties. The program has four major elements: a second generation browser, intuitive multitasking support for rapid switching between applications, customization to allow better allocation of resources, and user programmability at the shell level. This talk will demonstrate the program and discuss the design constraints and tradeoffs for Servant in the MAC environment.
Box 13

V58.78 An Introduction to the Probabilistic Analysis of Combinatorial Algorithms, by Richard Karp. 19 May 1987

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

In fields such as operations research, artificial intelligence and computer-aided design, algorithms are often used that perform well in practice even though there is no theoretical guarantee of their good performance. The simplex algorithm for linear programming is perhaps the most notable example of this phenomenon. It is a major challenge to algorithm designers to provide a foundation for such quick-and-dirty heuristic algorithms. One approach is through probabilistic analysis, in which one defines a probability distribution over the set of instances of a problem, and then endeavors to prove that some fast, simple algorithm performs well with high probability. The speaker will discuss this approach, using examples related to set partitioning, bin packing and linear programming. He will then make an assessment of the strengths and weaknesses of probabilistic analysis as a method of validating quick-and-dirty algorithms.
Box 14

V58.79 Why Computers Stop, by Jim Gray. 7 Oct. 1988

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

Conventional computers stop because of hardware, software, environment, and people. Fault-tolerant computers stop for the conventional reasons, but the statistics are different. This talk discusses the statistics and approaches to fault tolerance.
Box 14

V58.80 The Users Speak Out--A Customer's Perspective of Computer Science, by William Miller (Moderator), Dr. Peter Castro, Kodak, Peter Dietz, General Electric, Michael Rose, US WEST, and Dr. Bruce Johnson, Arthur Andersen. Part 2. 10 Feb. 1988

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

Computers and information systems are playing an increasing role in society, including fields that lie outside of science and engineering. In many of those fields our ability to implement new systems that provide increased capacity is limited by the state of the art in computer science and engineering. In many areas we know that we need a better understanding of the underlying science and technology (e.g., distributed systems) in order to design and implement leading edge systems. The panel session will provide an opportunity to raise issues of common interest to the academic community and our technologically oriented society as a whole.
Box 14

V58.81 The AlphaSyntauri Synthesizer, a New Concept in a Musical Instrument Project, by Ellen V.B. Lapham and Robin J. Jigour. 14 Oct. 1981

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

The alphaSyntauri instrument is a modular, programmable digital synthesizer system based on the Apple II microcomputer. The driving philosophy behind its design is the use of digital technologies to achieve flexibility, reliability and expandability in a compact and cost-effective package.
Box 14

V58.82 An Extension to the Programming Language C for VLSI Layout, Ph.D. Oral of Kevin Karplus. Edited master. 28 July 1982

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

CHISEL is a systems language for building VLSI design tools. It can be used directly for designing chips, or for writing PLA generators, graphics editors, placement and routing systems, or full silicon compilers. CHISEL programs produce libraries of CIF (Caltech Intermediate Format) cells, together with header files providing essential extra information (cell names, sizes, connection ports, ...). These headers provide all the information necessary for other CHISEL programs to use the cells. Utility programs exist for linking libraries, checking library consistency, plotting, and so on. The library structure gives strong support for hierarchical design styles. Named connection ports can be defined in any cell and copied with appropriate transformation when cells are placed or replicated. Simple but powerful primitives are provided for drawing wires and transistors. These form the basis for creating cells, and for interconnecting them. A new concept (fringes) is introduced to handle a variety of routing problems. An automatic wiring mechanism for non-permutation routing is provided using fringes.
Box 14

V58.83 Concurrent Testing at the Computer System Level, Ph.D. Oral of Masood Namjoo. 8 Nov. 1982

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

This study establishes improved techniques for on-line detection of computer malfunctions. In contrast with the classical techniques for detection of errors at the circuit level, the techniques in this study use the information about the overall system behavior for the purpose of error detection. Such information is extracted from the program definition and is given to an auxiliary low-cost processor, called a watchdog processor. The watchdog processor uses this information in order to detect abnormal system behaviors. Several techniques are explained for monitoring system attributes such as execution flow and data access using a watchdog processor. The architecture of a general purpose watchdog processor called CERBERUS-16 is described. Techniques for concurrent testing at the microprogram execution level are developed. In these techniques the microprogrammed control unit is modified so that it is possible to detect a large percentage of sequencing and microcode errors.
Box 15

V58.84 Copyright, Patents and Trademarks, by Dr. Donald J. Chesarek. 2 Feb. 1983

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

Copyright, patents and trademarks are formal mechanisms to protect intellectual and industrial property. The presentation will start with a review of basic concepts with a historical perspective and move to a discussion of current requirements based on easy to remember examples. Scenarios from the speaker's development experience will be used to illustrate how development problems can arise and some of the recovery techniques used to minimize rework. Understanding the basic concepts is fundamental to problem avoidance during the development cycle and legal problems after the product has shipped.
Box 15

V58.85 The New Generation Software Technology as Represented by the LISA System, by John Couch and John Love. 16 Feb. 1983

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

The Apple LISA is an integrated software and hardware personal computer system designed for the non-programmer. LISA features an iconic (picture-based) programming environment which closely models the office environment LISA is intended to support. Technical problems plagued this talk by John Couch and John Love; the camera was unable to capture a truly readable image of the demonstration on the machine, and during the early portion of the talk there were sound problems. Still, the talk itself and the questions afterward are of interest. The talk is followed by a professionally filmed demo provided by Apple, "in which the graphics are excellent."
Box 15

V58.86 Constraint Propagation Techniques for Theory-Driven Data Interpretation, Ph.D. Oral of Tom Dietterich. Edited master. 25 Oct. 1984

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

This talk defines the task of THEORY-DRIVEN DATA INTERPRETATION (TDDI) and investigates the adequacy of constraint propagation techniques for performing it. Data interpretation is the process of applying a given theory T (possibly a partial theory) to interpret observed facts F and infer a set of initial conditions C such that from C and T one can infer F. Most existing data interpretation programs do not employ an explicit theory T, but rather use some algorithm that embodies T. Theory-driven data interpretation involves performing data interpretation by working from an explicit theory. The method of local propagation of constraints is investigated as a possible technique for implementing TDDI. A model task--forming theories of the file system commands of the UNIX operating system--is chosen for an empirical test of constraint propagation techniques. In the UNIX task, the "theories" take the form of programs, and theory-driven data interpretation involves "reverse execution" of these programs. To test the applicability of constraint techniques, a system named EG has been constructed for the "reverse execution" of computer programs. The UNIX task was analyzed to develop an evaluation suite of data evaluation problems, and these problems have been processed by EG. The results of this empirical evaluation demonstrate that constraint propagation techniques are adequate for the UNIX task, but only if the representation for theories is augmented to include invariant facts about the programs. In general, constraint propagation is adequate for TDDI only if the theories satisfy certain conditions: local invertibility, lack of constraint loops, and tractable inference over propagated values. There are some annoying "blinks" in this tape. However, they do not detract from the content of the lecture.
Box 15

V58.87 Knowledge Engineering: Artificial Intelligence Research at the Stanford Heuristic Programming Project, Produced and Directed by Anne Feibelman. Edited master. Feb. 1985

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

This twenty-minute film has been chosen for the United States Pavilion at the 1985 World's Fair at Tsukuba, Japan.
Box 15

V58.88 Multi-Level Logic Array Synthesis, Ph.D. Oral of Christopher Rowen. Edited master. 12 June 1985

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

Automatic synthesis of VLSI logic circuits from function descriptions creates the opportunity for vastly reduced design cost, but presents formidable challenges. This silicon compilation requires a four-step translation: 1) writing the function in terms of available logic components; 2) minimization of this logic representation; 3) mapping of logic into the target technology's circuit primitives; and 4) selection of a detailed layout configuration. Existing methods based on programmable logic array (PLA), standard cell and gate array topologies attack the problem by placing severe restrictions on logic structure or circuit topology. A new method based on multi-level logic and Weinberger arrays integrates the entire compilation from functional description to layout generation, and provides greater flexibility in logic minimization, circuit topology and design goals. Multi-level logic and Weinberger arrays serve as ideal partners in synthesis of large circuit structures. Deeply nested logic expressions can save exponential area, power and circuit delay over sum-of-products form, and Weinberger arrays can directly implement the arbitrary interconnections of these complex logic functions.
Box 16

V58.89 Implementation of Resilient, Atomic Data Types, by Prof. Barbara Liskov. The first of two lectures given by Prof. Liskov on two successive days. Edited master. 29 Jan. 1986

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

A major issue in many distributed programs is how to preserve the consistency of data in the presence of concurrency and hardware failures. A promising approach is to implement the programs in terms of data types with two properties: their objects are atomic (they provide serializability and totality for the activities that use them) and resilient (they survive hardware failures with high probability). The talk discusses issues that arise in implementing such atomic types and describes a particular linguistic mechanism provided in the Argus programming language. The approach is illustrated with several examples.
Box 16

V58.90 Specifications of Distributed Programs, by Prof. Barbara Liskov. The second of two lectures given by Prof. Liskov on two successive days. Edited master. 30 Jan. 1986

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

Distributed programs frequently have performance requirements, such as high availability and concurrency. To accommodate these requirements, designers often change the functional behavior of the program from what it would have been in the absence of the requirements. The result is a program whose behavior is difficult to describe. This talk discusses how to give user-oriented, informal specifications of such programs. The specifications are given in a notation that distinguishes the expected and desirable effects from undesirable ones. This distinction makes the specifications easier to understand and is informative for both the users and the implementors of the program. To illustrate the approach, the talk will include example specifications of several distributed programs that have been described in the literature.
Box 16

V58.91 Important Areas of Computer Science Research as Seen from an Industrial Viewpoint, by Dr. Elliot Pinson, AT&T Bell Labs; Dr. Janusz Kowalik, Boeing Computer Services Co.; and Dr. George Dodd, G.M. Research Labs, Panelists of the 18th Annual Meeting of the Stanford Computer Forum. 4 Feb. 1986

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

This Panel Discussion was sponsored by the Eighteenth Annual Meeting of the Stanford Computer Forum. At the Seventeenth Annual Meeting in February, 1985, the industrial members of the Forum requested a greater interaction between the academic and industrial participants. Each participant opened with short remarks, followed by a panel discussion involving the audience and the panel's industrial participants. No constraints were placed on topics of discussion. The Forum member participants indicated those areas of computer science which they considered particularly important as research topics worldwide.
Box 16

V58.92 An Overview of Expert Systems, by Dr. Doug Lenat. Master. 14 May 1986

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

Dr. Lenat describes ways that artificial intelligence, specifically expert systems, will codify the body of common sense knowledge so that it can be used for problem solving and knowledge acquisition. Lenat presented the lectures at a Hewlett-Packard workshop, courtesy of the Computer Forum. H-P has graciously made the tapes available to us for the Distinguished Lecture Series.
Box 16

V58.93 Common Sense Knowledge, by Dr. Doug Lenat. Edited Master. 14 May 1986

Physical Description: 1 videotape(s) (U-matic)
Box 17

V58.94 The Transition of Computer Science Results to Business and Industry, by Dr. Richard Shuey, Moderator; Dr. Thomas Buckholz, PG&E; Dr. William Spencer, Xerox Corporation; and Dr. Gordon Bell, NSF, Panelists of the 19th Annual Meeting of the Stanford Computer Forum. 4 Feb. 1987

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

Panel discussions continue as a part of the Forum's Annual Meetings, in response to industrial members requesting a greater interaction between the academic and industrial participants. The 1987 discussion addressed how well findings in academia and pure research could be fit to applications, and how to best accomplish that transition.
Box 17

V58.95 The Social History of the Personal Computer, Lee Felsenstein 1989 Jun 7

Physical Description: 2 videotape(s) (U-matic)
Box 17

V58.96 Algorithm Automation, Marc Brown 1988 Sep 28

Physical Description: 1 videotape(s) (U-matic)
Box 17

V58.97 Microprocessors in the Works from Motorola, Russell Stanphill 1989 May 24

Physical Description: 1 videotape(s) (VHS)
Box 17

V58.98 Efficient Debugging of Parallel Programs, Bart Miller 1988 Mar 30

Physical Description: 1 videotape(s) (VHS)
Box 18

V58.99 Legal Issues, Tom Vinje 7 December 1988

Physical Description: 1 videotape(s) (VHS)
Physical Description: 2 videotape(s) (U-matic)

Scope and Content Note

Mr. Vinje discusses legal issues concerning the protection (copyright protectability) of computer software, covering areas such as look-and-feel and interface compatibility.
Box 18

V58.100 Viruses, Worms, and Other Distributed Computations, John Shoch and Jon Shapiro 9 November 1988

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

The network virus widely reported in the press the last few days is one example of distributed computations. The speakers will discuss the history of such distributed computations and the specifics of the particular virus recently found and eradicated.
Box 18

V58.101 The NeXT Computer, Craig Hansen 19 October 1988

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

The NeXT Computer was introduced on October 12 after a three-year development effort. The design of the NeXT Computer reaches breakthrough advances in cost and performance and provides a feature-rich computing environment that uses high-quality graphics and sound along with a large library of software and literature. The talk will describe the hardware systems of the NeXT Computer, including the processor card, backplane and other expansion interfaces, the Megapixel Display, the 400 DPI Laser Printer, and the 256 Megabyte read/write/erasable optical drive.
Box 19

V58.102 KEYKOS: A Secure Object-Oriented Operating System, Norm Hardy 1989 Feb 8

Physical Description: 1 videotape(s) (VHS)
Physical Description: 2 videotape(s) (U-matic)
Box 19

V58.103 RISC vs. CISC; Old vs. New, Nick Tredennick. 1989 Apr 5

Physical Description: 2 videotape(s) (U-matic)
Physical Description: 1 videotape(s) (VHS)

Scope and Content Note

Tredennick discusses trends in microprocessor architecture and development, including the current RISC fad. If RISCs succeed, it's on the strength of their implementations and in spite of the weak theoretical base. The RISC "breakthrough" is packaging the idea: changes to the architecture are a technology advance. The real issue is new architecture vs. old architecture. These ideas compete in a new field, with a weak experimental and theoretical base, developing under intense commercial pressure. The next microprocessor implementations will feature: register relabeling, distributed control, out-of-order execution, deep pipelining, write reservation queues, branch prediction, and multiple instruction issue.
Box 19

V58.104 Computer Science as Seen from Industry, panel discussion.

Physical Description: 1 videotape(s) (U-matic)
Box 20

V58.105 The Architecture and Implementation of the Intel 486, John Crawford and Ken Shoemaker 5/3/1989

Physical Description: 2 videotape(s) (VHS)

Scope and Content Note

The Intel 486 processor provides a 2.5 to 3 times performance gain over the 386. In this talk the microarchitecture of the 486 will be described, the various functional blocks and their interconnections identified, the pipelining within the execution and address calculation units explained, and the performance of the on-chip cache examined.
Box 20

V58.106 Intel's 960 Superscalar Architecture and Future Implementations, Steve McGeady 5/10/1989

Physical Description: 1 videotape(s) (VHS)

Scope and Content Note

Mr. McGeady will present a brief overview of Intel's 960 architecture, and will present technical details of a new implementation of that architecture that allows the dispatch and execution of several instructions each clock cycle. The detailed micro-architecture of the core processor will be described, as well as the macro-architectural considerations that enable such an implementation.
Box 20

V58.107 Conflicts of RISC Architecture in Real-Time Control, Kim Rubin 5/17/1989

Physical Description: 1 videotape(s) (VHS)

Scope and Content Note

RISC architecture trades away consistent performance for statistical performance. Large register sets, cache memory, and other features are diametrically opposed to the short context switching time and low latency needed by real-time control systems. Specific requirements that result from how embedded computers are used in real-time systems should dictate critical underlying architectural design.
Box 20

V58.108 Architecture and Design of the i860 64-bit Microprocessor, Les Kohn 4/12/1989

Physical Description: 1 videotape(s) (VHS)

Scope and Content Note

Semiconductor technology and CAD tools have reached the point where it is possible to develop 1,000,000 transistor microprocessors. Conceived to take full advantage of this technology, the i860 CPU is a single chip, 64-bit RISC based microprocessor which uses parallel instruction execution and supercomputer architectural concepts. To achieve balanced performance, one third of the chip area is devoted to integer instructions, one third to instruction and data caches, and one third to floating point and 3-D graphics support. The talk will review architectural goals as well as implementation considerations.
Box 20

V58.109 The MARS Hardware Accelerator, Prathima Agrawal 4/26/1989

Physical Description: 1 videotape(s) (VHS)

Scope and Content Note

This talk will focus on three aspects of MARS (Microprogrammable Accelerator for Rapid Simulations). First I will describe the architectural and design details of the machine. Second, I will present the logic simulation algorithms that have been implemented and are in production use today. Lastly, I will show the versatility of MARS for solving graph search problems by describing how MARS pipelines could be set up to accomplish other CAD and some of the non-CAD applications such as connected speech recognition.
Box 20

V58.110 Link-Time Code Modification, David W. Wall 2/1/1989

Physical Description: 1 videotape(s) (VHS)

Scope and Content Note

The language system for DECWRL's Titan and MultiTitan rests on an optimizing back end called Mahler. Mahler does some of its optimizations at link time rather than compile time, because more global information is available then. Nevertheless, these optimizations are optional; the Mahler compiler produces correct code that can be linked in a merely traditional manner. To achieve this, we have developed a technology of link-time code modification. Link-time code modification is proving useful in performance analysis and architectural modeling as well.
Box 20

V58.111 Backplane Bus Design, Paul Borrill 1/18/1989

Physical Description: 1 videotape(s) (VHS)

Scope and Content Note

This lecture is designed to give an overview of some of the theoretical and practical issues in backplane bus design. Topics covered will include transmission line effects, the bus driving problem, synchronization domains, spatial and manufacturing skews, clock latency, split vs. non-split transactions, arbitration methods, the byte-ordering problem, and protocols to support cache consistency along with general bus architecture models.
Box 20

V58.112 Toward a Gigabit National Network: Technology and Policy, David J. Farber 1/25/1989

Physical Description: 1 videotape(s) (VHS)

Scope and Content Note

Over the past two years major forces have been at work shaping the future of communications networks (in particular data networks). These forces have been both political and technological. In the political arena, the success of the NSFNet in the support of science has focused the attention of policy makers on where the nation must be in the year 2000. Several reports including one from the National Academy have outlined possible directions. In the technological arena the development of switching architectures capable of handling the variety of traffic anticipated and the rapid deployment of fibers have changed the complexion of the field. This talk will explore both areas and will also explore the impact of the gigabit world on the computer field.
Box 21

V58.113 Workstation for the '90s, Andreas Bechtolsheim 4/19/1989

Physical Description: 1 videotape(s) (VHS)

Scope and Content Note

Mr. Bechtolsheim discusses the goals, design, and implementation of SUN's new SPARC workstation, pointing out a number of its features.
Box 21

V58.114 CAD Applications of Massively Parallel Computers, Alberto Sangiovanni-Vincentelli 3/1/1989

Physical Description: 1 videotape(s) (VHS)

Scope and Content Note

Massively parallel computers such as the Connection Machine developed by D. Hillis at MIT offer a new paradigm for CAD algorithms which allows the exploitation of fine as well as coarse grain parallelism. Two CPU intensive CAD problems have been attacked with the Connection Machine: process simulation and device simulation. As submicron geometries require that simulations be performed in three dimensions, the computational requirements increase dramatically. This talk will focus on the relationships between algorithms and massively parallel architecture.
Box 21

V58.115 VorTeX, a Multiple Representation System, Michael Harrison 3/8/1989

Physical Description: 1 videotape(s) (VHS)

Scope and Content Note

There are two basic types of document processing systems, source-based (like TeX) and direct manipulation (a la MacWrite). VorTeX, for Visually Oriented TeX, is a multiple representation document processing system which has some features of both types of processors. In this talk, we will show a video tape of VorTeX, a companion system called IncTeX as well as a Postscript interpreter built at Berkeley. We will discuss some of the design, implementation, and performance issues.
Box 21

V58.116 Debug: a Multiprocessor Debugger for UNIX System V, Jonathan Shapiro 3/15/1989

Physical Description: 1 videotape(s) (VHS)

Scope and Content Note

Better integrated networking and the rapid proliferation of network-based applications is pushing us toward the need for multiprocess debugging across heterogeneous architectures. This talk describes Debug, a prototype developed internally at AT&T Bell Laboratories to explore the issues in this sort of debugging. Debug supports multiple source languages, and can debug both sides of a client-server application in which the server is running on one architecture and the client on another.
Box 21

V58.117 The Computer Forum Panel Discussion: Issues in Distributed Computing 2/15/1989

Physical Description: 1 videotape(s) (VHS)

Scope and Content Note

The costs of computing, memory, and mass storage continue to drop at a rapid rate. To date, that rate has been much more rapid than any reduction in communications costs. Added to these economic trends are advantages to the user and information system developer of local control and operation. The rise of micro "main frames," microcomputers, personal computers, and work stations illustrate the overall trend to distributed computing. At the same time, there will continue to be a need for central computing and data services. It is important that we understand the issues, and tradeoffs, in designing future distributed systems.
Box 21

V58.118 Betting, Bribery, and Bankruptcy: Organizing Computation Around Simulated Free Enterprise, Ted Kaehler, 11/16/88

Physical Description: 1 videotape(s) (VHS)

Scope and Content Note

Objects inside a computer exchange money for work and information in the course of solving a problem. When designing a monetary incentive system for computational entities, it helps to think of them as "bad" or "uncooperative."
Box 21

V58.119 Vectors are History, Forest Baskett 11/30/1988

Physical Description: 1 videotape(s) (VHS)

Scope and Content Note

In this talk we give a few technology-independent methods for assessing the architectural effectiveness of existing vector processors and scalar processors. We show the progress of scalar processor architectures over the last ten years and show how they are now surpassing vector processor architectures in terms of architectural effectiveness on vectorizable as well as non-vectorizable scientific and engineering applications. We illustrate this current state by comparing the actual performance of our Silicon Graphics IRIS 4D/240 scalar multiprocessor with the performance of some other vector multiprocessors. In closing we illustrate the critical importance of block method algorithms for achieving the performance potential of most current scalar processors.
Box 21

V58.120 Compiling for a Memory Hierarchy, Rob Schreiber 10/12/1988

Physical Description: 1 videotape(s) (VHS)

Scope and Content Note

In this talk we explore the use of a two-level memory hierarchy having a large, shared, relatively slow level and a small, distributed, relatively fast level. This scheme has the potential for exploiting the advantages of both shared and distributed memory systems. But to exploit it, programs must make most of their references to the fast, distributed memory. Block algorithms are a powerful tool for exploiting this memory architecture. In this talk we present a general compiler technique for automatically "blocking" algorithms. The problem is reduced to a simple geometric computation.
Box 21

V58.121 XTP/PE Overview, Greg Chesson 10/26/1988

Physical Description: 1 videotape(s) (VHS)

Scope and Content Note

XTP is a new network protocol, under development for the last few years, which is being incorporated into new ANSI and U.S. Government network standards. It is designed to perform network gateway functions as well as end-to-end transport functions. It provides for reliable multicasting, transactions, selective retransmission, parameterized addressing, delivery confirmation, and reliable datagrams, as well as traditional bulk and stream services. These services are implemented with simple algorithms that can be rendered in VLSI circuits. The Protocol Engine is a chip set designed to operate XTP in real time.
 

Additional material 1980-1987

Box 22

58.122 Software Controlled Caches in the VMP Multiprocessor, David Cheriton 10/8/86

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

VMP is an experimental multiprocessor that follows the familiar basic design of multiple processors, each with a cache, connected by a shared bus to global memory. Each processor has a synchronous, virtually addressed, single master connection to its cache, providing very high memory bandwidth. An unusually large cache page size and fast sequential memory copy hardware make it feasible for cache misses to be handled in software, analogously to the handling of virtual memory page faults. Hardware support for cache consistency is limited to a simple state machine that monitors the bus and interrupts the processor when a cache consistency action is required. In this talk, we describe how the VMP design provides the high memory bandwidth required by modern high-performance processors with a minimum of hardware complexity and cost. We also describe simple solutions to the consistency problems associated with virtually addressed caches. Simulation results indicate that the design achieves good performance provided data contention is not excessive.
Box 23

58.123 Xerox Personal Research Computers, Edward McCreight 10/22/80

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

Between 1973 and 1980, three microprogrammable 16-bit personal computers were developed at the Xerox Palo Alto Research Center. Divided into control, arithmetic, memory, macroinstruction-decoding, and I/O sections, the architectures of these computers are explained, compared, and contrasted.
Box 23

58.124 A High Level Approach to Computer Document Production, Brian K. Reid 12/3/80

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

People spend an outrageous amount of time using computers to format their documents. There are two possible explanations for this: the great importance of the task, or the inappropriateness of the standard tools. SCRIBE is a stab at building more appropriate tools: it was not designed to produce beautiful output, but to accept beautiful input. The SCRIBE system was designed around the principle that all good software should deal with the problem domain at as high a level of abstraction as possible, and around the underlying assumption that although users don't want to become experts, everybody wants to tinker. To these ends, SCRIBE is a high-level, non-procedural, table-driven system, with an extensive facility for making modifications to its behavior without needing to understand the programming issues behind them.
Box 23

58.125 The Architecture and Implementation of the Intel 432, Justin Rattner 2/25/81

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

The Intel 432 is a recently introduced microcomputer system designed to support very large scale software (VLSS) systems. In order to better serve software-intensive microcomputer applications, the 432 departs significantly from existing mainframe, mini-, and microcomputer designs. Examples of this departure include the 432's high-level, object-based architecture and its flexible multiprocessor system organization. This talk will briefly review the motivations and concepts that underlie the 432 and will then take up its object-based architecture and VLSI implementation.
Box 23

58.126 High Speed Operation of Local Area Networks, John Limb 2/13/82

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

Local communication networks need large transmission capacities to handle all traffic in the evolving business environment (e.g., data, voice, facsimile and video). The speaker first compares the high speed performance of two bi-directional CSMA protocols; one very popular and the other very efficient. A unidirectional system using a passive medium (called FASNET) will be described and compared with the other two systems.
Box 23

58.127 High Speed Operation of Local Area Networks, John Limb 2/13/82

Physical Description: 1 videotape(s) (VHS)
Box 23

58.128 A Retrospective on the Dorado, a High-Performance Personal Computer, Kenneth Pier 4/14/82

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

Between 1975 and 1980, a high performance personal computer called Dorado was conceived, designed, implemented and produced within Xerox PARC. The Dorado supports sophisticated single user programming environments in several high level languages including Lisp, Smalltalk, and Mesa. High bandwidth input/output devices, including color displays and networks, are supported as well. This is a retrospective on the properties, successes, and failures of the Dorado project, with an emphasis on the architecture and implementation of the machine. A review of the architecture opens the talk, followed by comments on specific sections introduced in the review. Finally, there is an overall evaluation and some speculation on the past and the future of Dorado-like architectures.
Box 23

58.129 Locus Developments, Gerald J. Popek 2/10/82

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

Locus is a distributed operating system, operational at UCLA, that provides a high degree of network transparency, both for data and processes. It provides a system call interface compatible with Unix, and permits a wide degree of resource allocation decisions, including support for diskless machines, replicated storage, etc. The Locus architecture will be briefly reviewed, and then recent developments and ongoing work will be outlined. These include experience with and limits to transparency, a low cost approximation to nested transactions, approaches to heterogeneity, and the boundary between transparency and local autonomy.
Box 23

58.130 The Optical Mouse and an Architectural Methodology for Smart Digital Sensors, Richard Lyon 10/6/82

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

A mouse is a pointing device used with interactive computer systems. Electromechanical mice were first developed in the 1960s at SRI. The mechanical mouse was redesigned at Xerox and has been in popular use there for several years. To overcome a number of observed problems with the mechanical mouse, work was started in the 1980s on an optical replacement for the electromechanical mouse. This talk describes the design and testing of the prototype optical mouse.
Box 23

58.131 The SUN Workstation, Andreas Bechtolsheim 11/9/82

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

The SUN Workstation is a desk-top personal computer system, combining mainframe processing power, Ethernet networking, and a high-speed raster graphics display with 1024 by 800 pixel resolution. Designed around a 32-bit microprocessor, the Motorola 68010, the SUN Workstation provides approximately 1 MIPS processor performance, multiprocess memory management, virtual memory capabilities, and a physical memory of 1 MByte or more. One of the software environments now being ported to the SUN Workstation is the Berkeley 4.2bsd UNIX operating system. This provides the same software environment on the SUN Workstation now available for the Digital Equipment VAX computer family, together with the ability to run SUN Workstations without local mass storage from remote fileservers. The talk focuses on the system architecture and the implementation of the SUN Workstation. Statistics and history of the project are discussed, and a demonstration is included.
Box 23

58.132 Database and Distributed Systems Research, Robin Williams 1/11/83

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

There are several database projects in IBM Research, San Jose, including work on distributed database systems, on high availability, on high performance and on databases for engineering design. Descriptions will be given of these activities with a concentration on R*, a distributed database system, which is now a working prototype, running on three IBM 4341 machines. R* is a confederation of co-operating sites each supporting the relational model of data. To achieve maximum site autonomy, statements in the language SQL are compiled, data objects are named and catalogued, and deadlock detection and recovery are all handled in a distributed manner. The R* architecture is presented.
Box 23

58.133 A High-Performance Implementation of Smalltalk-80™, Allan M. Schiffman 1/12/83

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

The programming language and environment Smalltalk-80 has been implemented on a variety of processors. The talk describes yet another implementation on a commercially-available microprocessor system (the SUN workstation). The details of the implementation are of interest since significant performance improvements have been obtained without sacrificing the existing semantics of Smalltalk. A preliminary version of the system has been benchmarked. In several significant departures from the implementation suggestions of the Smalltalk virtual-machine designers, our implementation has achieved substantial performance improvements. The overall design strategy has been summed up – "The implementer must cheat, but not get caught." We can group the majority of the departures under the headings of "dynamic translation" and "optimized contexts." These techniques could be applied to implementations of languages other than Smalltalk.
Box 24

58.134 Packet Network Interconnection Standards, Vint Cerf 2/1/83

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

This talk provides an overview of the history of packet switching. Relevant packet-switching technology will be reviewed, including packet radio, packet satellite, local networks, and terrestrial packet networks such as the ARPANET. We will then review network interconnection techniques leading to the present Department of Defense packet protocol standards, collectively referred to as the ARPA internet protocols.
Box 24

58.135 The Cedar Systems: Perspectives and Projections, Roy Levin 2/8/83

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

The Cedar system is an experimental programming environment for the "D" machines (Dorado, Dolphin, Dandelion). The language and system have been heavily influenced by other local programming systems, notably Mesa, Star, Smalltalk, and Interlisp. Consequently, it emphasizes integration of programming and non-programming tools in an environment tuned to rapid production of experimental software. This talk includes a brief review of the origins and goals of the Cedar project, a description of the present status, and a sketch of the probable development over the next couple of years. In the process, the talk focuses on the features of the system that are unique or unusual in Algol-tradition programming environments.
Box 24

58.136 Evidential Reasoning in Perception, John D. Lowrance 2/22/83

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

One common feature of any perceptual system is that it must reason based upon the evidential information that is provided by its sensors. Expert-system technology would seem to provide a mechanism for reasoning from evidence, yet there is very little agreement on how this should be done. Here we present our current understanding of this problem and some partial solutions. We begin by characterizing evidence as a body of information that is uncertain, incomplete, and sometimes inaccurate. Based on this characterization, we conclude that evidential reasoning requires both a method for pooling multiple bodies of evidence to arrive at a consensus opinion and some means of drawing the appropriate conclusions from that opinion. We contrast our approach, based on a relatively new mathematical theory of evidence, with those approaches based on Bayesian probability models. We believe that our approach has some significant advantages, particularly its ability to represent and reason from bounded ignorance. The specific problem addressed is that of multi-modal perception, particularly the integration of sensory information with prior world knowledge, in order to assess relevant aspects of the current environment. A prototype system has been constructed that demonstrates the utility of these techniques.
Box 24

58.137 Rediflow: A Multiprocessing Architecture Combining Reduction and Data Flow, Robert M. Keller 5/10/83

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

Functional languages, i.e., languages based upon composition of functions rather than upon sequences of commands, have been conjectured to be useful in making the power of multiple processors more accessible. We demonstrate the utility of the functional approach within application domains such as logic programming and database updating. Two principal types of evaluation model, reduction and dataflow, have been proposed for highly-concurrent evaluation of functional languages. After pointing out the relative advantages of each type, we indicate how the two can profitably be combined. We then describe a proposal for Rediflow, a multiprocessor architecture based on the combined model, which also features system-managed distribution of tasks to processing elements.
Box 24

58.138 How Fast Can a Single Instruction Counter Machine Execute?, Tilak Agerwala 5/24/83

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

After a brief overview of the limitations of conventional pipelined processors and vector processors, this talk describes recent work at IBM Research on very high speed scalar machines. Our goal was the paper design of a machine that could sustain a speed of roughly 100 million instructions per second on general purpose scientific/engineering applications, with peak floating point performance of roughly 100 million floating point operations per second. The talk will identify the extent to which these goals were achieved and the combination of compiler, machine organization, and architecture approaches used. The chosen design point represents a new partitioning of work between the compiler and the hardware. The talk will discuss the performance achievable by internally concurrent, "single instruction counter machines" and will compare our approach to others such as long instruction word computers and data flow.
Box 24

58.139 Software Army on the March – Project Strategies and Tactics, John R. Mashey 9/28/83

Physical Description: 1 videotape(s) (VHS)

Scope and Content Note

This talk describes the work of an army building roads as an analogy to software projects. Decision-making processes are examined from two different viewpoints. The first is the formal game theory viewpoint – making decisions in a non-deterministic, multi-state, non-zero-sum game played with incomplete information. The second is the "army" model. From this viewpoint are discussed such issues as: fighting the right war in the right place and choosing good routes to reach the goals; the need for scouts on motorcycles ("fast prototypers"); how campaigns differ, and thus affect choice of troops; special precautions for earthquake country; getting natives to buy and drive your trucks, instead of shooting your tires out as you drive through their villages. There exist many similarities in the decision processes of formal game analysis, military planning, and project management. The talk uses the first and second to help shed light on the third.
Box 24

58.140 The Human Factor: Designing Computer Systems for People, Dick Rubinstein 6/5/84

Physical Description: 1 videotape(s) (VHS)

Scope and Content Note

What principles of human factors – and what just plain good ideas – may be applied to the design of quality user interfaces? Dr. Rubinstein advances a number of principles and ideas that can make computer systems better for all users, including examples from the VAXstation user interface.
Box 24

58.141 Belief, Awareness and Limited Reasoning, Joe Halpern 10/1/85

Physical Description: 1 videotape(s) (VHS)
Box 24

58.142 Perceptual Organization and the Representation of Natural Form, Alex P. Pentland 11/19/85

Physical Description: 1 videotape(s) (VHS)
Box 24

58.143 A Knowledge-Based Approach to High-Level Program Optimization, Allen Goldberg 12/10/85

Physical Description: 1 videotape(s) (VHS)
Box 24

58.144 Technology Trends of CMOS VLSI, Yoshio Nishi 1/22/86

Physical Description: 1 videotape(s) (VHS)
Box 24

58.145 Lattices and Algorithms, Laszlo Lovasz 2/18/86

Physical Description: 1 videotape(s) (VHS)
Box 24

58.146 An Introduction to the Probabilistic Analysis of Combinatorial Algorithms, Richard Karp 5/19/87

Physical Description: 1 videotape(s) (VHS)
Box 24

58.147 Adaptive Mesh Refinement for Hyperbolic Partial Differential Equations, Marsha J. Berger 7/7/82

Physical Description: 1 videotape(s) (U-matic)

Scope and Content Note

In many time dependent simulations, the solution on most of the domain will be fairly smooth, with discontinuities or highly oscillatory phenomena occurring over only a small fraction of the domain. In problems such as these, a mesh refinement approach can be the most efficient, and often the only practical solution method. Refined grids with smaller and smaller mesh spacing are placed only where they are needed. Since we are solving a time dependent problem, the regions needing refinement will change, and therefore our grids must adapt with time as well. This thesis presents a method based on the idea of multiple, component grids for the solution of hyperbolic partial differential equations (pde) using explicit finite difference techniques. Using Richardson-type estimates of the local truncation error, refined grids are created or existing ones removed to attain a given accuracy for a minimum amount of work. In addition, this approach is recursive in that fine grids can themselves contain even finer subgrids. Those grids with finer mesh width in space will also have a smaller mesh width in time, making this a mesh refinement algorithm in time and space.
Box 24

58.148 Pamphlet to accompany talk by A. R. Stivers, Voltage Contrast: A Powerful Tool for VLSI Circuit Diagnosis

 

Accession ARCH-2001-332 Duplicates

Box 1

Numbers 1-12, 14, 18-19

Box 2

Numbers 25, 28, 32-34, 37-40, 42-46

Box 3

Numbers 47-52, 54-61

Box 4

Numbers 62-72 and Lectures of special interest A and C – E

Box 5

Lectures of special interest F – I