- Suppose our file system uses clusters of 2 KB each and we have a file
whose size is 173,594 bytes.
(a) How many clusters will be needed to store the file?
(b) At the end of the last cluster, how many bytes are not in use?
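The arithmetic behind (a) and (b) can be checked directly, assuming "2 KB" means 2,048 bytes per cluster:

```python
import math

CLUSTER_SIZE = 2 * 1024  # assuming 2 KB = 2,048 bytes
FILE_SIZE = 173_594      # file size in bytes

# (a) A file occupies whole clusters, so round up.
clusters = math.ceil(FILE_SIZE / CLUSTER_SIZE)

# (b) The slack is whatever the file does not fill in its last cluster.
slack = clusters * CLUSTER_SIZE - FILE_SIZE

print(clusters, slack)  # 85 clusters, 486 bytes unused
```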
- Suppose we know that 99% of our users will want to read files and
not modify them. (For instance, maybe we are running an information
kiosk at an airport.) This may have consequences for some of our
software.
(a) Would it be helpful to defragment our hard disks so files are
in contiguous space?
(b) Should we keep a cache of the answers to frequently-asked
questions or the answers to the last 20 queries?
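A "last 20 queries" cache is naturally expressed with least-recently-used eviction. This is one illustrative sketch (the class name and default capacity are assumptions, not part of the exercise):

```python
from collections import OrderedDict

class QueryCache:
    """Keep the answers to the most recent `capacity` queries, evicting LRU."""

    def __init__(self, capacity=20):
        self.capacity = capacity
        self._store = OrderedDict()  # insertion order tracks recency

    def get(self, query):
        if query not in self._store:
            return None
        self._store.move_to_end(query)  # mark as most recently used
        return self._store[query]

    def put(self, query, answer):
        self._store[query] = answer
        self._store.move_to_end(query)
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used
```

Because 99% of accesses are reads, entries rarely go stale, which is exactly the situation where a cache like this pays off.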
- Suppose we are using round-robin scheduling.
(a) If the time slices are too small, what will be the effect on
performance?
(b) If the time slices are too large, what will be the effect on
performance?
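The trade-off in (a) and (b) can be made concrete with a toy simulation. In this sketch the burst lengths, quantum, and context-switch cost are all made-up parameters; the point is only that the switch overhead grows as the quantum shrinks:

```python
def round_robin_overhead(bursts, quantum, switch_cost):
    """Run round-robin to completion; return (total elapsed time,
    time spent on context switches). `bursts` is the CPU time each
    process needs. A switch is charged after every time slice,
    which slightly overcounts at the very end (a simplification)."""
    remaining = list(bursts)
    clock = switches = 0
    while any(r > 0 for r in remaining):
        for i, r in enumerate(remaining):
            if r <= 0:
                continue
            run = min(quantum, r)
            remaining[i] -= run
            clock += run + switch_cost
            switches += switch_cost
    return clock, switches

# Three processes needing 50 units each, switch cost 1 unit:
print(round_robin_overhead([50, 50, 50], quantum=1, switch_cost=1))   # (300, 150): half the time is overhead
print(round_robin_overhead([50, 50, 50], quantum=50, switch_cost=1))  # (153, 3): little overhead, but each
                                                                      # process waits up to 100 units for a turn
```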
- Suppose the system that schedules disk accesses has the following
problem: The read/write heads are at track 30. Processes have requested
read operations involving tracks 22, 41, and 6. Each request is for a
small amount of data. Assume that the read/write heads move
one track at a time (0.5 ms per move). We would like to minimize the
average waiting time for the processes. In what order should we process
these requests?
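With only three requests, every service order can be checked by brute force. This sketch models only seek time (data transfer is negligible per the problem statement) and reports the order with the smallest average wait:

```python
from itertools import permutations

START = 30            # current head position (track)
MS_PER_TRACK = 0.5    # head movement cost per track
REQUESTS = (22, 41, 6)

def waits(order, start=START):
    """Waiting time (ms) of each request when served in the given order."""
    clock, pos, out = 0.0, start, []
    for track in order:
        clock += abs(track - pos) * MS_PER_TRACK  # seek to the track
        out.append(clock)                         # request completes here
        pos = track
    return out

best = min(permutations(REQUESTS), key=lambda order: sum(waits(order)))
print(best, sum(waits(best)) / len(best))
```

Note that the order minimizing *average* wait is not necessarily the same as always seeking to the nearest track next, so it is worth checking all six permutations rather than trusting a greedy rule.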
- Suppose we are thinking about the access matrix. We decide to use
capability lists. Each user has a home directory, and in the home
directory is a hidden file (invisible to the user) listing by name all files
that user is allowed to use and what privileges the user has for the
file. Does this sound like a practical scheme? Can you improve
on it?
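To make the scheme concrete, the hidden per-user file can be thought of as a mapping from file names to rights. A minimal in-memory sketch (all names and rights here are illustrative):

```python
# Each user's capability list: file name -> set of rights held on that file.
capabilities = {
    "alice": {"report.txt": {"read", "write"}, "notes.txt": {"read"}},
}

def allowed(user, filename, right):
    """True if `user` holds `right` on `filename` per their capability list."""
    return right in capabilities.get(user, {}).get(filename, set())
```

Laying the data out this way makes one property of capability lists visible: answering "what may this user do?" is a single lookup, while answering "who may touch this file?" requires scanning every user's list.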