Ant-32 is a new processor architecture designed specifically to address the pedagogical needs of teaching many subjects, including assembly language programming, machine architecture, compilers, operating systems, and VLSI design. This paper discusses our motivation for creating Ant-32 and the philosophy we used to guide our design decisions and gives a high-level description of the resulting design.
The Direct Access File System (DAFS) is an emerging industrial standard for network-attached storage. DAFS takes advantage of new user-level network interface standards. This enables a user-level file system structure in which client-side functionality for remote data access resides in a library rather than in the kernel. This structure addresses longstanding performance problems stemming from weak integration of buffering layers in the network transport, kernel-based file systems and applications. The benefits of this architecture include lightweight, portable and asynchronous access to network storage and improved application control over data movement, caching and prefetching.
This paper explores the fundamental performance characteristics of a user-level file system structure based on DAFS. It presents experimental results from an open-source DAFS prototype and compares its performance to a kernel-based NFS implementation optimized for zero-copy data transfer. The results show that both systems can deliver file access throughput in excess of 100 MB/s, saturating network links with similar raw bandwidth. Lower client overhead in the DAFS configuration can improve application performance by up to 40% over optimized NFS when application processing and I/O demands are well-balanced.
This paper presents a new instructional operating system, OS/161, and simulated execution environment, System/161, for use in teaching an introductory undergraduate operating systems course. We describe the new system, the assignments used in our course, and our experience teaching using the new system.
This paper addresses the problem of forming groups in peer-to-peer (P2P) systems and examines what dependability means in decentralized distributed systems. Much of the literature in this field assumes that the participants form a local picture of global state, yet little research has been done on how this state remains stable as nodes enter and leave the system. We assume that nodes remain in the system long enough to benefit from retaining state, but not so long that the dynamic nature of the problem can be ignored. We look at the components that describe a system’s dependability and argue that next-generation decentralized systems must explicitly delineate the information dispersal mechanisms (e.g., probe, event-driven, broadcast), the capabilities assumed about constituent nodes (bandwidth, uptime, re-entry distributions), and the distribution of information demands (needles in a haystack vs. hay in a haystack) [lv02gnutella]. We evaluate two systems based on these criteria: Chord [stoica01chord] and a heterogeneous-node hierarchical grouping scheme [ledlie02namequeries]. The former exhibits a failed-request rate of greater than 1% under normal P2P conditions, and a prototype of the latter exhibits a similar rate under more strenuous conditions while sending an order of magnitude more organizational messages. This analysis suggests several methods to greatly improve the prototype.
[lv02gnutella] Qin Lv, Sylvia Ratnasamy, Scott Shenker: Can Heterogeneity Make Gnutella Scalable?, Proceedings of the First International Workshop on Peer-to-Peer Systems (IPTPS 2002), Cambridge, MA, March 2002.
[stoica01chord] Ion Stoica, Robert Morris, David Karger, M. Frans Kaashoek, Hari Balakrishnan: Chord: A Scalable Peer-to-peer Lookup Service for Internet Applications, Proceedings of the ACM SIGCOMM 2001 Conference, August 2001.
[ledlie02namequeries] Jonathan Ledlie, Laura Serban, and Dafina Toncheva: Scaling Filename Queries in a Large-Scale Distributed File System, Harvard University Technical Report TR-03-02, January 2002.
We have examined the tradeoffs in applying regular and Compressed Bloom filters to the name query problem in distributed file systems and developed and tested a novel mechanism for scaling queries as the network grows large. Filters greatly reduced query messages in Fan’s "Summary Cache" for web cache hierarchies [fan00summary], a similar, albeit smaller, search problem. We have implemented a testbed that models a distributed file system and run experiments that test various configurations of the system to see if Bloom filters could provide the same kind of improvements. In a realistic system, where the chance that a randomly queried node holds the file being searched for is low, we show that filters always provide lower bandwidth/search and faster time/search, as long as the rates of change of the files stored at the nodes are not extremely high relative to the number of searches. In other words, we confirm the intuition that keeping some state about the contents of the rest of the system will aid in searching, as long as acquiring this state is not overly costly and it does not expire too quickly.
The grouping topology we have developed divides n nodes into log(n) groups, each of which has a representative node that aggregates a composite filter for the group. Nodes outside a group probe its low-precision composite filter to weed out whole collections of nodes at once, forwarding a search to be proxied by a member of the group only if the probe returns positively. Proxied searches are then carried out within a group, where more precise (more bits per file) filters are kept and exchanged among the n/log(n) nodes in a group. Experimental results show that both bandwidth/search and time/search are improved with this novel grouping topology.
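As a rough illustration of the probe-then-proxy idea (a sketch, not the paper’s implementation; the class and method names here are invented), a group representative can aggregate its members’ Bloom filters by OR-ing their bit arrays, and a remote node probes that composite filter before sending a search into the group:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions over an m-bit array."""
    def __init__(self, m=256, k=3):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item):
        # Derive k deterministic bit positions from salted SHA-256 digests.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def might_contain(self, item):
        # True means "possibly present" (false positives allowed);
        # False means "definitely absent", so the group can be skipped.
        return all(self.bits >> p & 1 for p in self._positions(item))

    def union(self, other):
        """Composite filter for a group: OR the members' bit arrays
        (both filters must share the same m and k)."""
        merged = BloomFilter(self.m, self.k)
        merged.bits = self.bits | other.bits
        return merged

# A representative aggregates two members' filters into one group filter.
member_a, member_b = BloomFilter(), BloomFilter()
member_a.add("reports/q1.txt")
member_b.add("logs/trace.dat")
group_filter = member_a.union(member_b)

# A remote node probes the composite before proxying a search.
if group_filter.might_contain("logs/trace.dat"):
    print("probe positive: forward search to a group member")
```

Note that a composite filter built this way is deliberately low-precision: OR-ing many members’ arrays raises the false-positive rate, which is why searches that pass the probe are then resolved against the more precise per-node filters inside the group.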
[fan00summary] Li Fan, Pei Cao, Jussara Almeida, Andrei Z. Broder: Summary Cache: A Scalable Wide-Area Web Cache Sharing Protocol, IEEE/ACM Transactions on Networking, 2000.
At the 1999 Workshop on Hot Topics in Operating Systems (HotOS VII), the attendees reached consensus that the most important issue facing the OS research community was "No-Futz" computing: eliminating the ongoing "futzing" that characterizes most systems today. To date, little research has been accomplished in this area. Our goal in writing this paper is to focus the research community on the challenges we face if we are to design systems that are truly futz-free, or even low-futz.
As discussed in our 2000 USENIX paper, lottery scheduling’s currency abstraction imposes lower limits on the resource allocations that clients of a system can receive. In our USENIX paper, we proposed one method for overcoming these lower limits. In this addendum to the paper, we discuss a limitation of our original solution to the lower-limits problem, and we propose an alternate solution that overcomes the shortcomings of the original solution.
Proportional-share resource management is becoming increasingly important in today’s computing environments. In particular, the growing use of the computational resources of central service providers argues for a proportional-share approach that allows resource principals to obtain allocations that reflect their relative importance. In such environments, resource principals must be isolated from one another to prevent the activities of one principal from impinging on the resource rights of others. However, such isolation limits the flexibility with which resource allocations can be modified to reflect the actual needs of applications. We present extensions to the lottery-scheduling resource management framework that increase its flexibility while preserving its ability to provide secure isolation. To demonstrate how this extended framework safely overcomes the limits imposed by existing proportional-share schemes, we have implemented a prototype system that uses the framework to manage CPU time, physical memory, and disk bandwidth. We present the results of experiments that evaluate the prototype, and we show that our framework has the potential to enable server applications to achieve significant gains in performance.