At the heart of the Internet Protocol (IP) portion of TCP/IP is a concept called the Internet address. This 32-bit coding system assigns a number to every node on the network. There are various types of addresses designed for networks of different sizes, but you can write every address with a series of numbers that identify the major network and the sub-networks to which a node is attached. Besides identifying a node, the address provides a path that gateways can use to route information from one machine to another.
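
As a small illustration, Python's standard ipaddress module can show the two roles of a 32-bit address - a single number for routing that splits into a network part and a host part. The address and prefix length below are arbitrary examples:

import ipaddress

# A hypothetical node address with a 24-bit network prefix.
iface = ipaddress.ip_interface("192.168.10.42/24")
print(int(iface.ip))          # the whole address as a single 32-bit number
print(iface.network)          # 192.168.10.0/24 - the network the node is attached to
print(iface.ip.packed.hex())  # the same four bytes that gateways route on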

 

Although data-delivery systems like Ethernet or X.25 bring their packets to any machine electrically attached to the cable, the IP modules must know each other's Internet addresses if they are to communicate. A machine acting as a gateway connecting different TCP/IP networks will have a different Internet address on each network. Internal look-up tables and software based on another standard - called the Address Resolution Protocol (ARP) - are used to route the data through a gateway between networks.
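
As a rough sketch of such an internal look-up table - an illustration only, not the actual protocol software - a gateway could map destination networks onto the interfaces it is attached to. The network prefixes and interface names below are invented:

import ipaddress

# A gateway attached to two networks keeps one entry per destination network.
routing_table = {
    ipaddress.ip_network("192.168.10.0/24"): "eth0",   # first attached network
    ipaddress.ip_network("10.0.0.0/8"):      "eth1",   # second attached network
}

def choose_interface(destination: str) -> str:
    addr = ipaddress.ip_address(destination)
    for network, interface in routing_table.items():
        if addr in network:          # does the destination fall inside this network?
            return interface
    return "default-gateway"

print(choose_interface("10.1.2.3"))   # -> eth1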

 

Another piece of software works with the IP-layer programs to move information to the right application on the receiving system. This software follows a standard called the User Datagram Protocol (UDP). You can think of the UDP software as creating a data address in the TCP/IP message that states exactly what application the data block is supposed to contact at the address the IP software has described. The UDP software provides the final routing for the data within the receiving system.
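
A minimal sketch of that final routing step, using Python's socket module: the IP address selects the machine, and the UDP port number selects the application on it. The host and port here are placeholders.

import socket

# SOCK_DGRAM selects UDP rather than TCP.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# The (address, port) pair names both the machine and the application on it.
sock.sendto(b"hello", ("192.0.2.10", 9999))
sock.close()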

 

The Transmission Control Protocol (TCP) part of TCP/IP comes into operation once the packet is delivered to the correct Internet address and application port. Software packages that follow the TCP standard run on each machine, establish a connection to each other, and manage the communication exchanges. A data-delivery system like Ethernet doesn't promise to deliver a packet successfully. Neither IP nor UDP knows anything about recovering packets that aren't successfully delivered, but TCP structures and buffers the data flow, looks for responses and takes action to replace missing data blocks. This concept of data management is called reliable stream service.
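
From an application's point of view, that reliable stream service looks roughly like the sketch below (host and port are placeholders): the program simply writes and reads bytes, while the TCP software underneath handles acknowledgements, ordering and retransmission.

import socket

# Open a TCP connection; the handshake and connection management happen inside TCP.
with socket.create_connection(("192.0.2.10", 8080), timeout=5) as conn:
    conn.sendall(b"GET /status\r\n")  # blocks until TCP has accepted every byte
    reply = conn.recv(4096)           # arrives in order, with lost segments already replaced
    print(reply)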

 

After TCP brings the data packet into a computer, other high-level programs handle it. Some are enshrined in official US government standards, like the File Transfer Protocol (FTP) and the Simple Mail Transfer Protocol (SMTP). If you use these standard protocols on different kinds of computers, you will at least have ways of easily transferring files and other kinds of data.

 

Conceptually, software that supports the TCP protocol stands alone. It can work with data received through a serial port, over a packet-switched network, or from a network system like Ethernet. TCP software doesn't need to use IP or UDP; it doesn't even have to know they exist. But in practice TCP is an integral part of the TCP/IP picture, and it is most frequently used with those two protocols.


The application layer is the only part of a communications process that a user sees, and even then, the user doesn't see most of the work that the application does to prepare a message for sending over a network. The layer converts a message's data from human-readable form into bits and attaches a header identifying the sending and receiving computers.

 
The presentation layer ensures that the message is transmitted in a language that the receiving computer can interpret (often ASCII). This layer translates the language, if necessary, and then compresses and perhaps encrypts the data. It adds another header specifying the language as well as the compression and encryption schemes.
 
The session layer opens communications and has the job of keeping straight the communications among all nodes on the network. It sets boundaries (called bracketing) for the beginning and end of the message, and establishes whether the messages will be sent half-duplex, with each computer taking turns sending and receiving, or full-duplex, with both computers sending and receiving at the same time. The details of these decisions are placed into a session header.
 
The transport layer protects the data being sent. It subdivides the data into segments and creates checksum tests - mathematical sums based on the contents of data - that can be used later to determine if the data was scrambled. It can also make backup copies of the data. The transport header identifies each segment's checksum and its position in the message.
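
In the spirit of that description, a toy checksum might simply fold the segment's byte values into a 16-bit sum that the receiver can recompute. Real protocols use specific algorithms such as the Internet checksum or a CRC; this is only an illustration.

def checksum(segment: bytes) -> int:
    # A mathematical sum based on the contents of the data, folded to 16 bits.
    return sum(segment) % 65536

segment = b"part 3 of the message"
transport_header = {"position": 3, "checksum": checksum(segment)}

# The receiver recomputes the sum; a mismatch means the data was scrambled.
assert checksum(segment) == transport_header["checksum"]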
The network layer selects a route for the message. It forms data into packets, counts them, and adds a header containing the sequence of packets and the address of the receiving computer.
 
The data-link layer supervises the transmission. It confirms the checksum, then addresses and duplicates the packets. This layer keeps a copy of each packet until it receives confirmation from the next point along the route that the packet has arrived undamaged.
 
The physical layer encodes the packets into the medium that will carry them - such as an analogue signal, if the message is going across a telephone line - and sends the packets along that medium.
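
Taken together, the sending-side layers behave like successive wrappers around the message. The sketch below imitates that wrapping with nested Python dictionaries; every header field is an invented placeholder, not a real protocol format.

message = "Quarterly report ready"

application  = {"header": {"from": "host-a", "to": "host-b"}, "data": message}
presentation = {"header": {"encoding": "ascii", "compressed": False}, "data": application}
session      = {"header": {"mode": "full-duplex", "bracket": "start"}, "data": presentation}
transport    = {"header": {"segment": 1, "checksum": 1234}, "data": session}
network      = {"header": {"packet": 1, "dest": "receiving-computer"}, "data": transport}

# The data-link and physical layers would frame this structure and encode it
# onto the medium; the receiving node strips the headers off in reverse order.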
 
An intermediate node calculates and verifies the checksum for each packet. It may also reroute the message to avoid congestion on the network.
 
At the receiving node, the layered process that sent the message on its way is reversed. The physical layer reconverts the message into bits. The data-link layer recalculates the checksum, confirms arrival, and logs in the packets. The network layer recounts incoming packets for security and billing purposes. The transport layer recalculates the checksum and reassembles the message segments. The session layer holds the parts of the message until the message is complete and sends it to the next layer. The presentation layer expands and decrypts the message. The application layer converts the bits into readable characters, and directs the data to the correct application.

 


Linux has its roots in a student project. In 1992, an undergraduate called Linus Torvalds was studying computer science in Helsinki, Finland. As in most computer science courses, a big component of it was taught on (and about) Unix. Unix was the wonder operating system of the 1970s and 1980s: both a textbook example of the principles of operating system design, and sufficiently robust to be the standard OS in engineering and scientific computing. But Unix was a commercial product (licensed by AT&T to a number of resellers), and cost more than a student could pay.

 Annoyed by the shortcomings of Minix (a compact Unix clone written as a teaching aid by Professor Andy Tanenbaum), Linus set out to write his own ‘kernel’ – the core of an operating system that handles memory allocation, talks to hardware devices, and makes sure everything keeps running. He used the GNU programming tools developed by Richard Stallman’s Free Software Foundation, an organization of volunteers dedicated to fulfilling Stallman’s ideal of making good software that anyone could use without paying. When he’d written a basic kernel, he released the source code to the Linux kernel on the Internet.
 
 Source code is important. It’s the original from which compiled programs are generated. If you don’t have the source code to a program, you can’t modify it to fix bugs or add new features. Most software companies won’t sell you their source code, or will only do so for an eye-watering price, because they believe that if they make it available it will destroy their revenue stream.
 
 What happened next was astounding, from the conventional, commercial software industry point of view – and utterly predictable to anyone who knew about the Free Software Foundation. Programmers (mostly academics and students) began using Linux. They found that it didn’t do things they wanted it to do – so they fixed it. And where they improved it, they sent the improvements to Linus, who rolled them into the kernel. And Linux began to grow.
 There’s a term for this model of software development; it’s called Open Source. Anyone can have the source code – it’s free (in the sense of free speech, not free beer). Anyone can contribute to it. If you use it heavily you may want to extend or develop or fix bugs in it – and it is so easy to give your fixes back to the community that most people do so.
 An operating system kernel on its own isn’t a lot of use: but Linux was purposefully designed as a near-clone of Unix, and there is a lot of free software out there that was designed to compile on Unix-like systems. By about 1992, the first ‘distributions’ appeared.
 
 A distribution is the Linux-user term for a complete operating system kit, complete with the utilities and applications you need to make it do useful things – command interpreters, programming tools, text editors, typesetting tools, and graphical user interfaces based on the X windowing system. X is a standard in academic and scientific computing, but not hitherto common on PCs; it’s a complex distributed windowing system on which people implement graphical interfaces like KDE and Gnome.
 
 As more and more people got to know about Linux, some of them began to port the Linux kernel to run on non-standard computers. Because it’s free, Linux is now the most widely-ported operating system there is.

Data mining is simply filtering through large amounts of raw data for useful information that gives businesses a competitive edge. This information is made up of meaningful patterns and trends that are already in the data but were previously unseen.

The most popular tool used when mining is artificial intelligence (AI). AI technologies try to work the way the human brain works, by making intelligent guesses, learning by example, and using deductive reasoning. Some of the more popular AI methods used in data mining include neural networks, clustering, and decision trees.

Neural networks look at the rules of using data, which are based on the connections found or on a sample set of data. As a result, the software continually analyses a value and compares it to the other factors, and it compares these factors repeatedly until it finds patterns emerging. These patterns are known as rules. The software then looks for other patterns based on these rules or sends out an alarm when a trigger value is hit.

Clustering divides data into groups based on similar features or limited data ranges. Clusters are used when data isn't labelled in a way that is favorable to mining. For instance, an insurance company that wants to find instances of fraud wouldn't have its records labelled as fraudulent or not fraudulent. But after analyzing patterns within clusters, the mining software can start to figure out the rules that point to which claims are likely to be false. Decision trees, like clusters, separate the data into subsets and then analyze the subsets to divide them into further subsets, and so on (for a few more levels). The final subsets are then small enough that the mining process can find interesting patterns and relationships within the data.
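
As a small sketch of the clustering idea (assuming the third-party scikit-learn library is available; the claim figures below are invented), records can be grouped purely by similarity and each group then inspected for patterns that might indicate fraud:

from sklearn.cluster import KMeans

# Each record: [claim amount, claims filed this year] - invented example data.
claims = [[120.0, 1], [130.0, 1], [125.0, 2],
          [9800.0, 7], [10150.0, 9], [9900.0, 8]]

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(claims)
print(labels)  # e.g. [0 0 0 1 1 1] - two groups with very different claim profiles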

Once the data to be mined is identified, it should be cleansed. Cleansing data frees it from duplicate information and erroneous data. Next, the data should be stored in a uniform format within relevant categories or fields. Mining tools can work with all types of data storage, from large data warehouses to smaller desktop databases to flat files. Data warehouses and data marts are storage methods that involve archiving large amounts of data in a way that makes it easy to access when necessary.
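
A minimal cleansing sketch along those lines - the field names and the validity rule are assumptions made purely for illustration - might drop duplicates and obviously erroneous values and store what remains in a uniform format:

raw_records = [
    {"name": "ACME Ltd ", "amount": "120.50"},
    {"name": "ACME Ltd ", "amount": "120.50"},   # duplicate record
    {"name": "Foo Inc",   "amount": "-999"},     # obviously erroneous value
]

cleaned, seen = [], set()
for record in raw_records:
    name, amount = record["name"].strip(), float(record["amount"])
    if amount < 0 or (name, amount) in seen:     # skip bad data and duplicates
        continue
    seen.add((name, amount))
    cleaned.append({"name": name, "amount": amount})  # uniform format

print(cleaned)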

When the process is complete, the mining software generates a report. An analyst goes over the report to see if further work needs to be done, such as refining parameters, using other data analysis tools to examine the data, or even scrapping the data if it's unusable. If no further work is required, the report proceeds to the decision makers for appropriate action.

The power of data mining is being used for many purposes, such as analyzing Supreme Court decisions, discovering patterns in health care, pulling stories about competitors from newswires, resolving bottlenecks in production processes, and analysing sequences in the human genetic makeup. There really is no limit to the type of business or area of study where data mining can be beneficial.


The basic components of a computer system - the input, the output, the memory, and the processor - operate only in response to commands from the control unit. The control unit operates by reading one instruction at a time from memory and taking the action called for by each instruction. In this way it controls the flow between main storage and the arithmetic-logical unit.

A control unit has the following components:

a) A counter that selects the instructions, one at a time, from memory.
b) A register that temporarily holds the instruction read from memory while it is being executed.
c) A decoder that takes the coded instruction and breaks it down into the individual commands necessary to carry it out.
d) A clock, which, while not a clock in the sense of a time-keeping device, does produce marks at regular intervals. These timing marks are electronic and very rapid. (A short simulation of how these components work together follows the list.)
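
The sketch below is a toy simulation of how such a control unit steps through a program - it is not any real machine's instruction set, just the counter, register and decoder idea expressed in Python:

memory = ["LOAD 5", "ADD 3", "STORE 0", "HALT"]   # an invented, four-instruction program
counter, accumulator = 0, 0

while True:
    register = memory[counter]            # the register holds the instruction just read
    counter += 1                          # the counter now selects the next instruction
    opcode, *operand = register.split()   # the decoder breaks the instruction down
    if opcode == "LOAD":
        accumulator = int(operand[0])
    elif opcode == "ADD":
        accumulator += int(operand[0])
    elif opcode == "STORE":
        memory[int(operand[0])] = str(accumulator)   # write the result back to memory
    elif opcode == "HALT":
        break

print(accumulator)   # -> 8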
 
Binary arithmetic (the kind of arithmetic the computer uses), the logical operations and some special functions are performed by the arithmetic-logical unit. The primary components of the ALU are banks of bi-stable devices, which are called registers. Their purpose is to hold the numbers involved in the calculation and to hold the results temporarily until they can be transferred to memory. At the core of the arithmetic-logical unit is a very high-speed binary adder, which is used to carry out at least the four basic arithmetic functions (addition, subtraction, multiplication, and division).
Typical modern computers can perform as many as one hundred thousand additions of pairs of thirty-two-bit binary numbers within a second. The logical unit consists of electronic circuitry which compares information and makes decisions based upon the results of the comparison. The decisions that can be made are whether a number is greater than (>), equal to (=), or less than (<) another number.
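
A compact sketch of both roles - 32-bit binary addition with the result held temporarily, and a comparison decision - might look like this (the operand values are arbitrary):

def alu(a: int, b: int):
    total = (a + b) & 0xFFFFFFFF                          # fixed-width 32-bit binary addition
    decision = ">" if a > b else "=" if a == b else "<"   # the logical comparison
    return total, decision                                # held until transferred to memory

print(alu(0b1010, 0b0110))   # -> (16, '>')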

 


 It is common practice in computer science for the words ‘computer’ and ‘processor’ to be used interchangeably. More precisely, ‘computer’ refers to the central processing unit (CPU) together with an internal memory. The internal memory or main storage, control and processing components make up the heart of the computer system. Manufacturers design the CPU to control and carry out basic instructions for their particular computer.

 The CPU coordinates all the activities of the various components of the computer. It determines which operations should be carried out and in what order. The CPU can also retrieve information from memory and can store the results of manipulations back into the memory unit for later reference. In digital computers the CPU can be divided into two functional units called the control unit (CU) and the arithmetic-logical unit (ALU). These two units are made up of electronic circuits with millions of switches that can be in one of two states, either on or off.
 The function of the control unit within the central processor is to transmit coordinating control signals and commands. The control unit is that portion of the computer that directs the sequence or step-by-step operations of the system, selects instructions and data from memory, interprets the program instructions, and controls the flow between main storage and the arithmetic-logical unit.
 The arithmetic-logical unit, on the other hand, is that portion of the computer in which the actual arithmetic operations, namely, addition, subtraction, multiplication, division and exponentiation, called for in the instructions are performed.

It also performs some kinds of logical operations such as comparing or selecting information. All the operations of the ALU are under the direction of the control unit.

Programs, and the data on which the control unit and the ALU operate, must be in internal memory in order to be processed. Thus, if located on secondary memory devices such as disks or tapes, programs and data are first loaded into internal memory.

 Main storage and the CPU are connected to a console, where manual control operations can be performed by an operator. The console is an important, but special-purpose, piece of equipment. It is used mainly when the computer is being started up, or during maintenance and repair. Many mini and micro systems do not have a console.

 Until the mid-1960s, digital computers were powerful, physically large and expensive. What was really needed, though, were computers with less power, a smaller memory capacity and a less extensive array of peripheral equipment. This need was partially satisfied by the rapid improvement in performance of the semiconductor devices (transistors), and their incredible reduction in size, cost and power, all of which led to the development of the minicomputer, or mini for short. Although there is no exact definition of a minicomputer, it is generally understood to refer to a computer whose mainframe is physically small, has a fixed word length between 8 and 32 bits and costs less than U.S. $100,000 for the central processor.

  The amount of primary storage available optionally in minicomputer systems ranges from 32K to 512K bytes; however, some systems allow this memory to be expanded even further. A large number of peripherals have been developed especially for use in systems built around minicomputers; they are sometimes referred to as mini peripherals. These include magnetic tape cartridges and cassettes, small disk units and a large variety of printers and consoles.

 Many minicomputers are used merely for a fixed application and run only a single program. This is changed only when necessary, either to correct errors or when a change in the design of the system is introduced. Since the operating environment for most minis is far less varied and complex than that of large mainframes, it goes without saying that the software and peripheral requirements differ greatly from those of a computer which runs several hundred ever-changing jobs a day. The operating systems of minis also usually provide system access to either a single user or to a limited number of users at a time. Since many minis are employed in real-time processing, they are usually provided with operating systems that are specialized for this purpose.

 For example, most minis have an interrupt feature which allows a program to be interrupted when they receive a special signal indicating that any one of a number of external events, to which they are preprogrammed to respond, has occurred. When the interrupt occurs, the computer stores enough information about the job in process to resume operation after it has responded to the interruption. Because minicomputer systems have been used so often in real-time applications, other aspects of their design have changed; that is, they usually possess the hardware capability to be connected directly to a large variety of measurement instruments, to analog and digital converters, to microprocessors, and ultimately, to an even larger mainframe in order to analyze the collected data.
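
The following is only a simulation of that interrupt behaviour, not real interrupt hardware: between steps of the job in process the machine checks for an external event, saves enough state to resume, services the event, and carries on. The job steps and events are invented.

job = ["read sensor", "scale value", "log value", "update display"]
pending_events = {2: "instrument ready"}          # an external event arrives before step 2

step = 0
while step < len(job):
    if step in pending_events:                    # the special interrupt signal
        saved_step = step                         # store enough information to resume
        print("interrupt:", pending_events.pop(step))
        step = saved_step                         # restore the saved state and continue
    print("job step:", job[step])
    step += 1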


Large computer systems, or mainframes, as they are referred to in the field of computer science, are those computer systems found in computer installations processing immense amounts of data. These powerful computers make use of very high-speed main memories into which data and programs to be dealt with are transferred for rapid access. These powerful machines have a larger repertoire of more complex instructions which can be executed more quickly. Whereas smaller computers may take several steps to perform a particular operation, a larger machine may accomplish the same thing with one instruction.

These computers can be of two types: digital or analog. The digital computer, or general-purpose computer as it is often known, makes up about 90 per cent of the large computers now in use. It gets its name because the data that are presented to it are made up of a code consisting of digits - single-character numbers. The digital computer is like a gigantic cash register in that it can do calculations in steps, one after another at tremendous speed and with great accuracy. Digital computer programming is by far the most commonly used in electronic data processing for business or statistical purposes.

The analog computer works something like a car speedometer, in that it continuously works out calculations. It is used essentially for problems involving measurements. It can simulate, or imitate, different measurements by electronic means. Both of these computer types - the digital and the analog - are made up of electronic components that may require a large room to accommodate them. At present, the digital computer is capable of doing anything the analog once did. Moreover, it is easier to program and cheaper to operate. A new type of scientific computer system called the hybrid computer has now been produced that combines the two types into one.

Really powerful computers continue to be bulky and require special provision for their housing, refrigeration systems, air filtration and power supplies. This is because much more space is taken up by the input/output devices - the magnetic tape and disk units and other peripheral equipment - than by the electronic components, which do not make up the bulk of the machine in a powerful installation. The power consumption of these machines is also quite high, not to mention the price, which runs into hundreds of thousands of dollars. The future will bring great developments in the mechanical devices associated with computer systems. For a long time these have been the weak link, from the point of view of both efficiency and reliability.

 



In order to use computers effectively to solve problems in our environment, computer systems are devised. A ‘system’ implies a good mixture of integrated parts working together to form a useful whole. Computer systems may be discussed in two parts.

 The first part is hardware - the physical, electronic, and electromechanical devices that are thought of and recognized as ‘computers’. The second part is software - the programs that control and coordinate the activities of the computer hardware and that direct the processing of data.
 Figure 5.1 shows diagrammatically the basic components of computer hardware joined together in a computer system. The centerpiece is called either the computer, the processor, or usually the central processing unit (CPU). The term ‘computer’ usually refers to those parts of the hardware in which calculations and other data manipulations are performed, and to the internal memory in which data and instructions are stored during the actual execution of programs. The various peripherals, which include input and/or output devices, various secondary memory devices, and so on, are attached to the CPU.
 Computer software can be divided into two very broad categories: systems software and applications software. The former is often simply referred to as ‘systems’. These, when brought into internal memory, direct the computer to perform tasks. The latter may be provided along with the hardware by a systems supplier as part of a computer product designed to answer a specific need in certain areas. These complete hardware/software products are called turnkey systems.
 The success or failure of any computer system depends on the skill with which the hardware and software components are selected and blended. A poorly chosen system can be a monstrosity incapable of performing the tasks for which it was originally acquired.

 



Like all machines, a computer needs to be directed and controlled in order to perform a task successfully. Until such time as a program is prepared and stored in the computer’s memory, the computer ‘knows’ absolutely nothing, not even how to accept or reject data. Even the most sophisticated computer, no matter how capable it is, must be told what to do. Until the capabilities and the limitations of a computer are recognized, its usefulness cannot be thoroughly understood.

 In the first place, it should be recognized that computers are capable of doing repetitive operations. A computer can perform similar operations thousands of times, without becoming bored, tired, or even careless. Secondly, computers can process information at extremely rapid rates. For example, modern computers can solve certain classes of arithmetic problems millions of times faster than a skilled mathematician. Speeds for performing decision-making operations are comparable to those for arithmetic operations; input-output operations, however, involve mechanical motion and hence require more time. On a typical computer system, cards are read at an average speed of 1000 cards per minute and as many as 1000 lines can be printed at the same rate.
 Thirdly, computers may be programmed to calculate answers to whatever level of accuracy is specified by the programmer. In spite of newspaper headlines such as ‘Computer Fails’, these machines are very accurate and reliable especially when the number of operations they can perform every second is considered. Because they are man-made machines, they sometimes malfunction or break down and have to be repaired. However, in most instances when the computer fails, it is due to human error and is not the fault of the computer at all.
 In the fourth place, general-purpose computers can be programmed to solve various types of problems because of their flexibility. One of the most important reasons why computers are so widely used today is that almost every big problem can be solved by solving a number of little problems one after another. Finally, a computer, unlike a human being, has no intuition. A person may suddenly find the answer to a problem without working out too many of the details, but a computer can only proceed as it has been programmed to.
 Using the very limited capabilities possessed by all computers, the task of producing a university payroll, for instance, can be done quite easily. The following kinds of things need to be done for each employee on the payroll. First: Input information about the employee such as wage rate, hours worked, tax rate, unemployment insurance, and pension deductions. Second: Do some simple arithmetic and decision-making operations. Third: Output a few printed lines on a cheque. By repeating this process over and over again, the payroll will eventually be completed.
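
A toy version of that three-step loop, with invented names, rates and flat deduction percentages, might look like this:

employees = [
    {"name": "A. Smith", "rate": 20.0, "hours": 40, "tax": 0.15, "pension": 0.05},
    {"name": "B. Jones", "rate": 25.0, "hours": 35, "tax": 0.20, "pension": 0.05},
]

for emp in employees:                                   # repeat the process for each employee
    gross = emp["rate"] * emp["hours"]                  # some simple arithmetic
    deductions = gross * (emp["tax"] + emp["pension"])  # decision-making would refine this
    net = gross - deductions
    print(f'Pay {emp["name"]}: {net:.2f}')              # output a few printed lines on a cheque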