Why stable storage cannot be implemented

Information stored on the hard drive of a computer is recorded as a series of magnetic impulses; it cannot be read, used, or manipulated without the hardware that stores it. Physical damage to the drive and other types of system corruption, including viruses, can easily undermine the integrity of the information stored on a drive by scrambling or damaging the drive's storage system.

Ensuring stable storage in a computer means constructing an information storage system that is guaranteed against immediate errors following a write operation (an operation that adds to or replaces stored data). Basic commercial hard drives do not qualify as stable storage on their own; however, with software and configuration tools, they can fulfill the demands of stable storage.

To be considered stable storage, a drive must, immediately after a write procedure (in which information is saved to the disk), be able to read back the exact information that was just written, without errors.

This explains why commercial hard drives fail as stable storage: there is always the possibility that a drive will return an error following any specific write operation. Some techniques, however, exist to turn commercial hard drives into stable storage devices.
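One such technique is write-then-verify: after each write, the data is forced to the medium and read back for comparison. Below is a minimal Python sketch of that discipline (the function name is mine); a real stable-storage implementation would also keep a second copy on an independent device and recover from mismatches.

```python
import os

def stable_write(path, offset, data):
    """Write a block, force it to the medium, then read it back to verify.

    A minimal sketch of write-then-verify; real stable storage would keep
    a second copy on an independent device and switch copies on mismatch.
    """
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    try:
        os.lseek(fd, offset, os.SEEK_SET)
        os.write(fd, data)
        os.fsync(fd)                       # push the write past OS caches
        os.lseek(fd, offset, os.SEEK_SET)
        return os.read(fd, len(data)) == data
    finally:
        os.close(fd)
```

The operation succeeds only if the bytes read back match the bytes written, which is exactly the property stable storage demands.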

Increasing the stability of commercial hard drives is possible through such software management techniques.

Ans: File-System Mounting
The mount procedure is straightforward. The operating system is given the name of the device and the mount point—the location within the file structure where the file system is to be attached. Typically, a mount point is an empty directory.

Ans: Access Methods
Files store information. When it is used, this information must be accessed and read into computer memory.
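As a concrete sketch, the two most common access methods, sequential and direct (relative) access, can be modeled over a file of fixed-size records. The record size and helper names below are illustrative.

```python
# Sequential access reads records in order from the current position;
# direct access jumps straight to record n by computing its byte offset.

RECORD_SIZE = 16

def read_sequential(f):
    """Yield fixed-size records in order: sequential access."""
    while chunk := f.read(RECORD_SIZE):
        yield chunk

def read_direct(f, n):
    """Seek to record n and read it: direct (relative) access."""
    f.seek(n * RECORD_SIZE)
    return f.read(RECORD_SIZE)
```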

The information in the file can be accessed in several ways. Some systems provide only one access method for files. Other systems, such as those of IBM, support many access methods, and choosing the right one for a particular application is a major design problem.

Ans: Directory Implementation
The selection of directory-allocation and directory-management algorithms significantly affects the efficiency, performance, and reliability of the file system.

In this section, we discuss the trade-offs involved in choosing one of these algorithms.

Ans: Swap-Space Use
Swap space is used in various ways by different operating systems, depending on the memory-management algorithms in use.

For instance, systems that implement swapping may use swap space to hold an entire process image, including the code and data segments. Paging systems may simply store pages that have been pushed out of main memory.

The amount of swap space needed on a system can therefore vary, depending on the amount of physical memory, the amount of virtual memory it is backing, and the way in which the virtual memory is used. It can range from a few megabytes of disk space to gigabytes.

Ans: Copy-on-Write
We illustrated how a process can start quickly by merely demand-paging in the page containing the first instruction. However, process creation using the fork system call may initially bypass the need for demand paging by using a technique similar to page sharing covered in Section 8.
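A small POSIX-only demonstration of fork's copy semantics (the helper name is mine): the child's write is invisible to the parent, and on Linux the kernel achieves this lazily by sharing pages copy-on-write and duplicating only the page that is actually written.

```python
import os

def fork_demo():
    data = ["parent"]
    pid = os.fork()
    if pid == 0:            # child process
        data[0] = "child"   # this write triggers a private copy of the page
        os._exit(0)
    os.waitpid(pid, 0)      # parent waits for the child to finish
    return data[0]          # parent's copy is unchanged
```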

This technique provides for rapid process creation and minimizes the number of new pages that must be allocated to the newly created process.

Ans: File Replication
Replication of files on different machines in a distributed file system is a useful redundancy for improving availability.
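Replica selection can be sketched as simply picking the replica with the lowest measured round-trip latency; the replica names and numbers below are invented for illustration, and a real system would also weigh load and replica freshness.

```python
def pick_replica(latency_ms):
    """Return the replica whose measured latency (in ms) is lowest."""
    if not latency_ms:
        raise ValueError("no replicas available")
    return min(latency_ms, key=latency_ms.get)
```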

Multimachine replication can benefit performance too: selecting a nearby replica to serve an access request results in shorter service time.

Ans: Special-Purpose Systems
The discussion thus far has focused on the general-purpose computer systems that we are all familiar with. There are, however, other classes of computer systems whose functions are more limited and whose objective is to deal with limited computation domains.

Ans: Scheduling Criteria
Different CPU scheduling algorithms have different properties, and the choice of a particular algorithm may favor one class of processes over another. In choosing which algorithm to use in a particular situation, we must consider the properties of the various algorithms.
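One commonly cited criterion is CPU utilization, the percentage of elapsed time the CPU spends doing useful work; the arithmetic is simple.

```python
def cpu_utilization(busy_time, total_time):
    """Percentage of total_time during which the CPU was busy (0 to 100)."""
    if total_time <= 0:
        raise ValueError("total_time must be positive")
    return 100.0 * busy_time / total_time
```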

Many criteria have been suggested for comparing CPU scheduling algorithms. Which characteristics are used for comparison can make a substantial difference in which algorithm is judged to be best. One such criterion is CPU utilization: we want to keep the CPU as busy as possible. Conceptually, CPU utilization can range from 0 to 100 percent. In a real system, it should range from 40 percent (for a lightly loaded system) to 90 percent (for a heavily used system).

Ans: Thread Scheduling
We introduced threads to the process model, distinguishing between user-level and kernel-level threads.

On operating systems that support them, it is kernel-level threads—not processes—that are being scheduled by the operating system. User-level threads are managed by a thread library, and the kernel is unaware of them. To run on a CPU, user-level threads must ultimately be mapped to an associated kernel-level thread, although this mapping may be indirect and may use a lightweight process (LWP).

In this section, we explore scheduling issues involving user-level and kernel-level threads and offer specific examples of scheduling for Pthreads. There are two primary ways of implementing a thread library. The first approach is to provide a library entirely in user space with no kernel support. All code and data structures for the library exist in user space.

This means that invoking a function in the library results in a local function call in user space and not a system call.

Ans: Peterson's Solution
We illustrate a classic software-based solution to the critical-section problem known as Peterson's solution.

Because of the way modern computer architectures perform basic machine-language instructions, such as load and store, there are no guarantees that Peterson's solution will work correctly on such architectures. However, we present the solution because it provides a good algorithmic description of solving the critical-section problem and illustrates some of the complexities involved in designing software that addresses the requirements of mutual exclusion, progress, and bounded waiting.
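A sketch of Peterson's algorithm in Python for two threads, following the usual flag/turn presentation. This is a demonstration, not production code: as noted above, the algorithm depends on loads and stores not being reordered, which CPython's bytecode interleaving happens to provide in practice but weakly ordered hardware does not.

```python
import sys
import threading

flag = [False, False]  # flag[i] is True while thread i wants to enter
turn = 0               # which thread must defer when both want to enter
counter = 0            # shared data touched only inside the critical section

def worker(i, iterations):
    global turn, counter
    other = 1 - i
    for _ in range(iterations):
        flag[i] = True
        turn = other                          # yield priority to the other thread
        while flag[other] and turn == other:  # entry section: busy-wait
            pass
        counter += 1                          # critical section
        flag[i] = False                       # exit section

def run(iterations=200):
    sys.setswitchinterval(0.0005)  # switch threads often so the spin loops resolve quickly
    threads = [threading.Thread(target=worker, args=(i, iterations)) for i in (0, 1)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter
```

If mutual exclusion holds, the final counter equals the total number of critical-section entries across both threads.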

Peterson's solution is restricted to two processes that alternate execution between their critical sections and remainder sections. The processes are numbered P0 and P1.

Ans: Synchronization Hardware
We have just described one software-based solution to the critical-section problem.

In general, we can state that any solution to the critical-section problem requires a simple tool—a lock. Race conditions are prevented by requiring that critical regions be protected by locks. That is, a process must acquire a lock before entering a critical section; it releases the lock when it exits the critical section.

Ans: System Model
A system consists of a finite number of resources to be distributed among a number of competing processes. The resources are partitioned into several types, each consisting of some number of identical instances.

For example, the resource type printer may have five instances. If a process requests an instance of a resource type, the allocation of any instance of the type will satisfy the request.

If it will not, then the instances are not identical, and the resource type classes have not been defined properly.


