Resolution can be as high as 400 X 800 dpi, with gray scales ranging from 16 to 128 levels. These
are medium- to high-throughput devices, producing complex images in about a minute. On-board
computing facilities, such as RISC processors and fast hard disk storage mechanisms, contribute to
rapid drawing and processing speeds. Expansion slots accommodate interface cards for LANs or
parallel ports.
InkJet Plotter. InkJet plotters and printers fire tiny ink droplets at paper or a similar medium
from minute nozzles in the printing head. Heat generated by a separate heating element almost
instantaneously vaporizes the ink. The resulting bubble generates a pressure wave that ejects an ink
droplet from the nozzle. Once the pressure pulse passes, ink vapor condenses and the negative
pressure produced as the bubble contracts draws fresh ink into the nozzle. These plotters do not
require special paper and can also be used for preliminary drafts. InkJet plotters are available both
as desktop units for 8.5 X 11-in. graphics and in wide format for engineering CAD drawings. Typical
full-color resolution is 360 dpi, with black-and-white resolution rising to 700 X 720 dpi. These
devices handle both roll-feed and cut sheet media in widths ranging from 8.5-36 in. Also, ink capacity
in recently developed plotters has increased, allowing these devices to handle large rolls of paper
without depleting any one ink color. InkJet plotters are very user-friendly, often including sensors for
the ink supply and ink flow that warn users of an empty cartridge or of ink stoppage, allowing
replacement without losing a print. Other sensors eliminate printing voids and unwanted marks caused
by bubbles in the ink lines. Special print modes typically handle high-resolution printing by repeatedly
going over image areas to smooth image lines. In addition, inkjet plotters typically contain 6-64
megabytes of image memory and options such as hard drives, an Ethernet interface for networking,
and built-in Postscript interpreters for faster processing. InkJet plotters and printers are increasingly
dominating other output technologies, such as pen plotters, in the design laboratory.
Laser Plotter. Laser plotters produce fairly high-quality hard copies in a shorter period of time
than pen plotters. A laser housed within the plotter projects rasterized image data in the form of light
onto a photostatic drum. As the drum rotates further about its axis, it is dusted with an electrically
charged powder known as toner. The toner adheres to the drum wherever the drum has been charged
by the laser light. The paper is brought into contact with the drum and the toner is released onto the
paper, where it is fixed by a heat source close to the exit point. Laser plotters can quickly produce
images in black and white or in color, and resolution is high.
Software is the collection of executable computer programs including operating systems, languages,
and application programs. All of the hardware described above can do nothing without software to
support it. In its broadest definition, software is a group of stored commands, sometimes known as
a program, that provides an interface between the binary code of the CPU and the thought processes
of the user. The commands provide the CPU with the information necessary to drive graphical
displays and other output devices and to establish links between input devices and the CPU. The
commands also define paths that enable other command sequences to operate. Software operates at all
levels of computer function. Operating systems are a type of software that provides a platform upon
which other programs may run. Likewise, individual programs often provide a platform for the
operation of subroutines, which are smaller programs dedicated to the performance of specific tasks
within the context of the larger program.
13.7.1 Operating Systems
Operating systems have developed over the past 50 years for two main purposes. First, operating
systems attempt to schedule computational activities to ensure good performance of the computing
system. Second, they provide a convenient environment for the development and execution of pro-
grams. An operating system may function as a single program or as a collection of programs that
interact with each other in a variety of ways.
An operating system has four major components: process management, memory management,
input/output operations, and file management. The operating system schedules and performs input/
output, allocates resources and memory space and provides monitoring and security functions. It
governs the execution and operation of various system programs and applications such as compilers,
databases, and CAD software.
Operating systems that serve several users simultaneously (e.g., UNIX) are more complicated than
those serving only a single user (e.g., MS-DOS, Macintosh Operating System). The two main themes
in operating systems for multiple users are multiprogramming and multitasking.
Multiprogramming provides for the interleaved execution of two or more computer programs
(jobs) by a single processor. In multiprogramming, while the current job is waiting for the input/
output (I/O) to complete, the CPU is simply switched to execute another job. When that job is
waiting for I/O to complete, the CPU is switched to another job, and so on. Eventually, the first job
completes its I/O functions and is serviced by the CPU again. As long as there is some job to
complete, the CPU remains active. Holding multiple jobs in memory at one time requires special
hardware to protect each job, some form of memory management, and CPU scheduling. Multipro-
gramming increases CPU use and decreases the total time needed to execute the jobs, resulting in
greater throughput.
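The throughput gain can be sketched with a classic back-of-envelope model (not taken from this chapter): if each resident job spends a fraction p of its time waiting on I/O, and the waits are assumed independent, the CPU is busy whenever at least one job is runnable.

```python
# Toy model of multiprogramming's effect on CPU use (illustrative only;
# it assumes each job's I/O waits are independent of the others').
def cpu_utilization(n_jobs: int, io_fraction: float) -> float:
    # The CPU is idle only when every resident job is waiting on I/O.
    return 1.0 - io_fraction ** n_jobs

# A single job that waits on I/O 80% of the time keeps the CPU only
# 20% busy; five such jobs in memory raise utilization to about 67%.
one = cpu_utilization(1, 0.8)
five = cpu_utilization(5, 0.8)
```

The exact numbers are rough, since real jobs' I/O waits are correlated, but the model captures why holding more jobs in memory raises throughput.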
The techniques that use multiprogramming to handle multiple interactive jobs are referred to as
multitasking or time-sharing. Multitasking or time-sharing is a logical extension of multiprogramming
for situations where an interactive mode is essential. The processor's time is shared among multiple
users. Time-sharing was developed in the 1960s, when most computers were large, costly mainframes.
The requirement for an interactive computing facility could not be met by the use of a dedicated
computer. An interactive system is used when a short response time is required. Time-sharing op-
erating systems are very sophisticated, requiring extra disk management facilities and an on-line file
system having protective mechanisms as well.
The following sections discuss the two most widely used operating systems for CAD applications,
UNIX and Windows NT. It should be noted that both of these operating systems can run on the same
hardware architecture.
UNIX
The first version of UNIX was developed in 1969 by Ken Thompson and Dennis Ritchie of the
Research Group of Bell Laboratories to run on a PDP-7 minicomputer. The first two versions of
UNIX were created using assembly language, while the third version was written using the C pro-
gramming language. As UNIX evolved, it became widely used at universities, research and govern-
ment institutions, and eventually in the commercial world. UNIX quickly became the most portable
of operating systems, operable on almost all general-purpose computers. It runs on personal com-
puters, workstations, minicomputers, mainframes, and supercomputers. UNIX has become the pre-
ferred program-development platform for many applications, such as graphics, networking, and
databases. A proliferation of new versions of UNIX has led to a strong demand for UNIX standards.
Most existing versions can be traced back to one of two sources: AT&T System V or 4.3 BSD
(Berkeley UNIX) from the University of California, Berkeley (one of the most influential versions).
UNIX was designed to be a time-sharing, multi-user operating system. UNIX supports multiple
processes (multiprogramming). A process can easily create new processes with the fork system call.
Processes can communicate with pipes or sockets. CPU scheduling is a simple priority algorithm.
Memory management is a variable-region algorithm with swapping supported by paging. The file
system is a multilevel tree that allows users to create their own subdirectories. In UNIX, I/O devices
such as printers, tape drives, keyboards, and terminal screens are all treated as ordinary files (file
metaphor) by both programmers and users. This simplifies many routine tasks and is a key component
in extensibility of the system. Certifiable security that protects users' data and network support are
two other important features.
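As a concrete sketch of the process model just described, the following POSIX-only fragment creates a child process with fork and passes a message back through a pipe (the function name and message are illustrative, not part of any standard interface):

```python
import os

def parent_child_message(msg: bytes) -> bytes:
    """Create a child with fork(); the child sends msg back over a pipe."""
    r, w = os.pipe()            # unidirectional kernel pipe: read/write ends
    pid = os.fork()             # duplicate the calling process
    if pid == 0:                # child process: write the message and exit
        os.close(r)
        os.write(w, msg)
        os._exit(0)             # exit child without running cleanup handlers
    os.close(w)                 # parent process: read the child's message
    data = os.read(r, len(msg))
    os.close(r)
    os.waitpid(pid, 0)          # reap the child
    return data

reply = parent_child_message(b"hello from the child")
```

The same pattern underlies the shell itself: each command a user types is run in a child created by fork.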
UNIX consists of two separable parts: the kernel and the system programs. The kernel is the
collection of software that provides the basic capabilities of the operating system. In UNIX, the
kernel provides the file system, CPU scheduling, memory management, and other operating system
functions (I/O devices, signals) through system calls. System calls can be grouped into three cate-
gories: file manipulation, process control, and information manipulation. System programs use the
kernel-supported system calls to provide useful functions, such as compilation and file manipulation.
Programs, both system and user-written, are normally executed by a command interpreter. The com-
mand interpreter is a user process called a shell. Users can write their own shell. There are, however,
several shells in general use. The Bourne shell, written by Steve Bourne, is the most widely available.
The C shell, written mostly by Bill Joy, is the most popular on BSD systems. The Korn shell, by
David Korn, has also become quite popular in recent years.
Windows NT
The development effort for the new high-end operating system in the Microsoft Windows family,
Windows NT (New Technology), has been led by David Cutler since 1988. Market requirements and
sound design characteristics shaped the Windows NT development. The architects of "NT," as it is
popularly known, capitalized on the strengths of UNIX while avoiding its pitfalls. Windows NT and
UNIX share striking similarities. There are also marked differences between the two systems. UNIX
was designed for host-based terminal computing (multi-user) in 1969, while Windows NT was de-
signed for client/server distributed computing in 1990. The users on single-user general-purpose
workstations (clients) can connect to multi-user general-purpose servers with the processing load
shared between them. There are two Windows NT-based operating systems: Windows NT Server and
Windows NT Workstation. The Windows NT Workstation is simply a scaled-down version of Win-
dows NT Server in terms of hardware and software. Windows NT is a microkernel-based operating
system. The operating system runs in privileged processor mode (kernel mode) and has access to
system data and hardware. Applications run on a non-privileged processor mode (user mode) and
have limited access to system data and hardware through a controlled set of application
programming interfaces (APIs). Windows NT also supports both single-processor and symmetric
multiprocessing (SMP) operations. Multiprocessing refers to computers with more than one processor.
A multiprocessing computer is able to execute multiple threads simultaneously, one for each processor
in the computer. In SMP, any processor can run any type of thread. The processors communicate
with each other through shared memory. SMP provides better load-balancing and fault-tolerance. The
Win32 subsystem is the most critical of the Windows NT environment subsystems. It provides the
graphical user interface and controls all user input and application output.
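The shared-memory thread model behind SMP can be sketched in a few lines; on a single processor the threads are interleaved, while an SMP machine could run one per processor (the counter and thread counts below are arbitrary illustration):

```python
import threading

counter = 0
lock = threading.Lock()

def worker(increments: int) -> None:
    global counter
    for _ in range(increments):
        with lock:              # serialize updates to the shared variable
            counter += 1

# Four threads communicating through shared memory, as in the SMP model.
threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter is now 4 * 1000
```

The lock is what makes the shared update safe; without it, concurrent increments could be lost regardless of how many processors the machine has.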
Windows NT is a fully 32-bit operating system with all 32-bit device drivers, paving the way for
future development. It makes administration easy by providing more flexible built-in utilities and
remote diagnostic tools. Windows NT Workstation provides full crash protection to maximize up-
time and reduce support costs. Windows NT is a complete operating system with fully integrated
networking, including built-in support for multiple network protocols. Security is pervasive in Win-
dows NT to protect system files from error and tampering. The NT file system (NTFS) provides
security for multiple users on a machine.
Windows NT, like UNIX, is a portable operating system. It runs on many different hardware
platforms and supports a multitude of peripheral devices. It integrates preemptive multitasking for
both 16- and 32-bit applications into the operating system, so it transparently shares the CPUs among
the running applications. More usable memory is available due to advanced memory features of
Windows NT. There are more than 1400 32-bit applications available for Windows NT today, in-
cluding all major CAD and FEA software applications.
Hardware requirements for the Windows NT operating system fall into three main categories:
processor, memory, and disk space. In general, Windows NT Server requires more in each of the
three categories than does its sister operating system, the Windows NT Workstation. The minimum
processor requirements are a 32-bit x86-based microprocessor (Intel 80386/25 or higher), Intel Pen-
tium, Apple Power-PC, or other supported RISC-based processor, such as the MIPS R4000 or Digital
Alpha AXP. The minimum memory requirement is 16 MB. The minimum disk space requirements
for just the operating system are in the 100-MB range. NT Workstation requires 75 MB for x86 and
97 MB for RISC. For the NT Server, 90 MB for x86 and 110 MB for RISC are required. Additional
disk space must be provided for any applications that are run on the NT operating system.
13.7.2 Graphical User Interface (GUI) and the X Window System
DOS, UNIX, and other command-line operating systems have long been criticized for the complexity
of their user interface. For this reason, GUI is one of the most important and exciting developments
of this decade. The emergence of GUI revolutionized the methods of man-machine interaction used
in the modern computer. GUIs are available for almost every type of computer and operating system
on the market. A GUI is distinguished by its appearance and by the way an operator's actions and
input options are handled. There are over a dozen GUIs. They may look slightly different, but they
all share certain basic similarities. These include the following: a pointing device (mouse or digitizer),
a bit-mapped display, windows, on-screen menus, icons, dialog boxes, buttons, sliders, check boxes,
and an object-action paradigm. Simplicity, ease of use, and enhanced productivity are all benefits of
a GUI. GUIs have fast become important features of CAD software.
Graphical user interface systems were first envisioned by Vannevar Bush in a 1945 journal article.
Xerox was researching graphical user interface tools at the Palo Alto Research Center throughout the
1970s. By 1983, every major workstation vendor had a proprietary window system. It was not until
1984, however, when Apple introduced the Macintosh computer, that a truly robust window environ-
ment reached the average consumer. In 1984, a project called Athena at MIT gave rise to the X
Window system. Athena investigated the use of networked graphics workstations as a teaching aid
for students in various disciplines. The research showed that people could learn to use applications
with a GUI much more quickly than by learning commands.
The X Window system is a non-vendor-specific window system. It was specifically developed to
provide a common window system across networks connecting machines from different vendors.
Typically, the communication is via Transmission Control Protocol/Internet Protocol (TCP/IP) over
an Ethernet network. The X Window system (X-Windows or X) is not a GUI. It is a portable, network-
transparent window system that acts as a foundation upon which to build GUIs (such as AT&T's
OpenLook, OSF/Motif, and DECwindows). The X Window system provides a standard means of
communicating between dissimilar machines on a network, so that an application running on one
machine can be viewed in a window on another. The unique benefit provided by a window system
is the ability to have multiple views showing different processes running on different machines on
the network. Since the X Window system is in the public domain and not specific
to any platform or operating system, it has become the de facto window system in heterogeneous
environments from PCs to mainframes.
Unfortunately, a window environment does not come without a price. Extra layers of software
separate the user from the operating system: in a UNIX operating environment, the window system,
the GUI, and an application programming interface (toolkit). GUIs also place extra demands on
hardware. All visualization workstations require more powerful processing capabilities (> 6 MIPS),
large CPU memory and disk subsystems, built-in network I/O (typically Ethernet), high-speed
internal bus structures (> 32 MB/sec), high-resolution monitors (> 1024 X 768), more colors
(> 256), and so on.
For PCs, both operating systems and GUIs are in a tremendous state of flux. Microsoft Windows,
Windows NT, and Windows 95 are expected to dominate the market, followed by the Macintosh.
For workstations, the OSF/Motif interface on an X-Windows system seems to have the best potential
to become an industry-wide graphical user interface standard.
13.7.3 Computer Languages
The computer must be able to understand the commands it is given in order to perform the tasks
at hand. The binary code used by the computer circuitry is very easy for the computer to understand,
but can be tedious and almost indecipherable to the human programmer. Languages for computer
programming have developed to facilitate the programmer's job. Languages are often categorized as
low- or high-level languages.
Low-Level Languages
The term low-level refers to languages that are easy for the computer to understand. These languages
are often specific to a particular type of computer, so that programs created on one type of computer
must be modified to run on another type. Machine language (ML) and assembly language (AL) are
both considered low-level languages.
Machine language is the binary code that the computer understands. ML uses an operator com-
mand coupled with one or more operands. The operator command is the binary code for a specific
function, such as addition. The numbers to be added, in this example, are operands. Operators are
also binary codes, arbitrary with respect to the machine used. For a hypothetical computer, all operator
codes are established to be eight digits, with the operator command appearing after the two operands.
If the operator code for addition then were 01100110, the binary (base 2) representation of the two
numbers to be added would be followed by the code for addition. A command line to perform the
addition of 21 and 14 would then be written as follows:
00010101 00001110 01100110
The two operands are written in their 8-bit binary forms (21 in base 10 as 00010101 in base 2, and
14 in base 10 as 00001110 in base 2) and are followed by the operator command (01100110 for
addition). The binary nature of this language makes programming difficult and error-correction even
more so.
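The hypothetical encoding above can be sketched mechanically (the 8-bit field widths and the 01100110 add code are the chapter's invented example, not a real instruction set):

```python
def encode_add(a: int, b: int, opcode: str = "01100110") -> str:
    """Lay out the command line: two 8-bit operands, then the operator."""
    return f"{a:08b} {b:08b} {opcode}"

line = encode_add(21, 14)   # "00010101 00001110 01100110"
```

Even this tiny helper hints at why assemblers exist: writing and checking such bit strings by hand is tedious and error-prone.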
AL operates in a similar manner to ML but substitutes words for machine codes. The program
is written using these one-to-one relationships between words and binary codes and separately as-
sembled through software into binary sequences. Both ML and AL are time-intensive for the pro-
grammer and, because of the differences in logic circuitry between types of computers, the languages
are specific to the computer being used. High-level languages address the problems presented by
these low-level languages in various ways.
High-Level Languages (HLLs)
High-level languages give the programmer the ability to bypass much of the tediousness of program-
ming involved in low-level languages. Often many ML commands will be combined within one HLL
statement. The programming statements in HLL are converted to ML using a compiler. The compiler
uses a low-level language to translate the HLL commands into ML and check for errors. The net
gain in terms of programming time and accuracy far outweighs the extra time required to compile
the code. Because of their programming advantages, HLLs are far more popular and widely used
than low-level languages. The following commonly used programming languages are described below:
• FORTRAN
• Pascal
• BASIC
• C
• C++
FORTRAN (FORmula TRANslation). Developed at IBM between 1954 and 1957 to perform
complex calculations, this language employs a hierarchical structure similar to that used by mathe-
maticians to perform operations. The programmer uses formulas and operations in the order that
would be used to perform the calculation manually. This makes the language very easy to use.
FORTRAN can perform simple as well as complex calculations. FORTRAN is used primarily for
scientific or engineering applications. The CFP95 suite, a software benchmarking product by the
Standard Performance Evaluation Corp. (SPEC), is written in FORTRAN. It contains 10 CPU-intensive
floating-point benchmarks.
The programming field in FORTRAN is composed of 80 columns, arranged in groups relating to
a programming function. The label or statement number occupies columns 1-5. If a statement extends
beyond the statement field, a continuation symbol is entered in column 6 of the next line, allowing
the statement to continue on that line. The programming statements in FORTRAN are entered in
columns 7-72. The maximum number of lines in a FORTRAN statement is 20. Columns 73-80 are
used for identification purposes. Information in these columns is ignored by the compiler, as are any
statements with a C entered in column 1.
Despite its abilities, there are several inherent disadvantages to FORTRAN. Text is difficult to
read, write, and manipulate. Commands for program flow are complicated, and a subroutine cannot
call itself recursively to perform the same function.
Pascal. Pascal is a programming language with many different applications. It was developed
by Niklaus Wirth in Switzerland during the early 1970s and named after the French mathematician
Blaise Pascal. Pascal can be used in programs relating to mathematical calculations, file processing
and manipulation, and other general-purpose applications.
A program written in Pascal has three main sections: the program name, the variable declaration,
and the body of the program. The program name is typically the word PROGRAM followed by its
title. The variable declaration includes defining the names and types of variables to be used. Pascal
can use various types of data and the user can also define new data types, depending on the re-
quirements for the program. Defined data types used in Pascal include strings, arrays, sets, records,
files, and pointers. Strings consist of collections of characters to be treated as a single unit. Arrays
are sequential tables of data. Sets define a data set collected with regard to sequence. Records are
mixed data types organized into a hierarchical structure. Files refer to collections of records outside
of the program itself, and pointers provide flexible referencing to data. The body of the program uses
commands to execute the desired functions. The commands in Pascal are based on English and are
arranged in terms of separate procedures and functions, both of which must have a defined beginning
and end. A function can be used to execute an equation and a procedure is used to perform sets of
equations in a defined order. Variables can be either "global" or "local," depending on whether they
are to be used throughout the program or within a particular procedure. Pascal is somewhat similar
to FORTRAN in its logical operation, except that Pascal uses symbolic operators while FORTRAN
operates using commands. The structure of Pascal allows it to be applicable to areas other than
mathematical computation.
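A rough analogue of the record type described above, a hierarchical mix of data types grouped into one unit, can be sketched as follows; the sketch is in Python rather than Pascal, and the field names are purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class Part:                     # plays the role of a Pascal RECORD
    name: str                   # string field
    diameter: float             # numeric field
    hole_sizes: list            # array-like field nested inside the record

# A sequential table of records, much as Pascal would declare an
# ARRAY [1..2] OF Part.
inventory = [Part("flange", 120.0, [4, 8]), Part("shaft", 25.4, [])]
```

The nesting of a string, a number, and an array inside one named unit is the hierarchical structure the text attributes to Pascal records.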
BASIC (Beginner's All-purpose Symbolic Instruction Code). BASIC was developed at Dart-
mouth College by John Kemeny and Thomas Kurtz in the mid-1960s. BASIC uses mathematical
programming techniques similar to FORTRAN and the simplified format and data manipulation
capabilities similar to Pascal. As in FORTRAN, BASIC programs are written using line numbers to
facilitate program organization and flow. Because of its simplicity, BASIC is an ideal language for
the beginning programmer. BASIC runs in either direct or programming modes. In the direct mode,
the program allows the user to perform a simple command directly, yielding an instantaneous result.
The programming mode is distinguished by the use of line numbers that establish the sequence of
the programming steps. For example, if the user wishes to see the words PLEASE ENTER DIAMETER
displayed on the screen immediately, he would execute the command PRINT "PLEASE ENTER
DIAMETER." If, however, that phrase were to appear in a program, the above command would be
preceded by the appropriate line number.
The compiler used in the BASIC language is unlike the compiler used for either FORTRAN or
Pascal. Whereas other HLL compilers check for errors and execute the program as a whole unit, a
BASIC program is checked and compiled line by line during program execution. BASIC is often
referred to as an "interpreted" language as opposed to a compiled one, since it interprets the program
into ML line by line. This condition allows for simplified error debugging. In BASIC, if an error is
detected, it can be corrected immediately, while in FORTRAN and Pascal, the programmer must go
back to the source program in order to correct the problem and then recompile the program as a
separate step. The interpretive nature of BASIC does cause programs to run significantly more slowly
than in either Pascal or FORTRAN.
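The line-by-line execution model can be sketched with a toy interpreter; the two-statement dialect below (LET and PRINT keyed by line number) is invented for illustration and is far smaller than real BASIC:

```python
def run_basic(program: dict) -> list:
    """Execute numbered LET/PRINT statements in line-number order."""
    env, output = {}, []
    for number in sorted(program):          # line numbers fix the sequence
        stmt = program[number]
        if stmt.startswith("LET "):
            name, expr = stmt[4:].split("=", 1)
            env[name.strip()] = eval(expr, {}, env)   # interpret at once
        elif stmt.startswith("PRINT "):
            output.append(str(eval(stmt[6:], {}, env)))
    return output

# 10 LET D = 4
# 20 PRINT D * D
result = run_basic({10: "LET D = 4", 20: "PRINT D * D"})
```

Because each statement is translated as it is reached, an error can be reported at the offending line immediately, the debugging convenience the text describes, at the cost of repeating the translation work on every run.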
C. C was developed from the B language by Dennis Ritchie in 1972. A de facto standard for C
emerged in the late 1970s, when B. W. Kernighan and Ritchie's book The C Programming Language
was published. C was developed specifically as a tool for writing operating systems and compilers. It
originally became most widely known as the development language for the UNIX operating systems.
C expanded at a tremendous rate over many hardware platforms. This led to many variations and a
lot of confusion; while these variations were similar, there were notable differences. This was a
problem for developers who wanted to write programs that ran on several platforms. In 1989, the
American National Standards Committee on Computers and Information Processing approved a stan-