Network File System


The Network File System (NFS), initially developed by Sun Microsystems, is a communication protocol and application programming interface that is emerging as a de facto standard for distributed file system services in a heterogeneous computing environment. It permits a partition of a server's file system to be associated with either a device or a subdirectory on a client, depending on the file/device model of the client's file system. Although NFS was first implemented within a Unix environment, it is now implemented within several different operating system environments. File manipulation primitives supported by NFS include: read, write, create a file or directory, remove a file or directory, and look up a file name. NFS includes an Application Layer protocol and is usually part of a Transmission Control Protocol/Internet Protocol (TCP/IP) protocol stack.
NFS is referred to as a stateless system. This means that the server does not maintain the state of files, from the client's point of view, in file systems mounted by clients. There are no open or close primitives in NFS. Each file manipulation request from the client contains all of the information necessary for the server to complete the request, and the server responds fully to every request without being aware of the conditions under which the client is making it. Thus, for example, if the server fails, the client may simply continue to request file access until the server is able to respond. Only the client knows the state of a file for which service is requested. In a system where a server maintains the state of files as they are accessed by each client, the failure of a client, a server, or the network is difficult to recover from in a manner that restores clients and servers to the conditions in place before the failure.
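The retry behavior described above can be sketched in a few lines. This is a toy illustration, not real NFS client code; the names (`nfs_read`, `FlakyServer`) and the in-memory "server" are invented for the example. The point is that each request carries its full context (file handle, offset, count), so a client can simply reissue the identical request after a server failure with no open/close state to restore.

```python
import time

def nfs_read(server, file_handle, offset, count, timeout=1.0, retries=5):
    """Send a self-contained READ request; retry until the server responds."""
    for attempt in range(retries):
        reply = server.read(file_handle, offset, count)  # one complete request
        if reply is not None:
            return reply
        time.sleep(timeout)  # server may be down or restarting; just ask again
    raise TimeoutError("server did not respond")

class FlakyServer:
    """Toy server that fails its first request, then recovers."""
    def __init__(self, data):
        self.data = data
        self.calls = 0
    def read(self, handle, offset, count):
        self.calls += 1
        if self.calls == 1:
            return None  # simulated crash/restart: no per-client state is lost
        return self.data[handle][offset:offset + count]

server = FlakyServer({"fh1": b"hello, stateless world"})
print(nfs_read(server, "fh1", 0, 5, timeout=0.01))
```

Because the server keeps no record of which clients are reading which files, the "recovered" server can satisfy the retried request exactly as if the failure had never happened.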
However, the absence of knowledge on the part of the server concerning what clients are doing to files can lead to unpleasant consequences. For example, one client may have a file open on a server and another client may delete the open file. The server is unaware that a client has the file open. In particular, a fully stateless mechanism cannot be used in database applications. Record and file locking inherently involves managing the current state of a file by a server.
In order to permit database applications, the record locking mechanism specified in the System V Interface Definition (SVID) is supported by another protocol, the Network Lock Manager, which works in conjunction with NFS. The Network Lock Manager uses Status monitors, daemon processes on both clients and servers, to initiate recovery procedures in the event of failure. By means of status monitors, clients and servers notify each other concerning their operational state. If a client fails, then when the client is restarted, the server removes all lock information for that client and the client resubmits all lock requests. If a server fails, then when the server is restarted, it notifies all clients and clients resubmit their lock requests.
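The recovery scheme above treats the server's lock table as soft state that clients can rebuild. A minimal sketch, with hypothetical class names (`LockServer`, `LockClient`) standing in for the Network Lock Manager and its status-monitor notifications:

```python
class LockServer:
    def __init__(self):
        self.locks = {}  # (file, byte_range) -> client name

    def lock(self, client, file, byte_range):
        key = (file, byte_range)
        if self.locks.get(key, client) != client:
            return False  # lock held by another client
        self.locks[key] = client
        return True

    def restart(self, clients):
        self.locks.clear()            # all lock state lost in the crash
        for c in clients:
            c.resubmit_locks(self)    # status monitor prompts clients to re-lock

class LockClient:
    def __init__(self, name):
        self.name = name
        self.held = []  # locks this client believes it holds

    def acquire(self, server, file, byte_range):
        if server.lock(self.name, file, byte_range):
            self.held.append((file, byte_range))

    def resubmit_locks(self, server):
        for file, byte_range in self.held:
            server.lock(self.name, file, byte_range)

server = LockServer()
alice = LockClient("alice")
alice.acquire(server, "db.dat", (0, 100))
server.restart([alice])  # crash + notification: the lock table is rebuilt
print(server.locks)
```

The same idea works in the other direction: if a client fails, the server discards that client's entries and waits for the restarted client to resubmit.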
Most NFS implementations use Remote Procedure Call. Such implementations usually support the user authentication methods of Remote Procedure Call discussed in section.

Another example of a communication protocol and application programming interface for distributed file systems is the Remote File System (RFS) developed by AT&T. However, RFS supports a distributed file system only among nodes that have Unix-compatible file systems.
RFS is an example of an approach to distributed file systems that is termed stateful; that is, the server maintains information about the state of the file on the client, such as whether the file is open. This is necessary in order for RFS to support the full Unix file system semantics. The file systems of many operating systems do not support the semantics of a Unix file system. Participation in a distributed file system implemented using NFS does not require the semantics of a Unix file system. Thus, file systems from many different producers are able to be part of a distributed file system implementation using NFS.
Since RFS supports the full semantics of a Unix file system, the application programming interface for RFS is the Unix input/output (I/O) application programming interface. Since NFS is intended to support operating systems other than Unix, NFS can be described as having two layers of application programming interface. The high layer is the I/O application programming interface of the client operating system. The low layer is the NFS Remote Procedure Calls, which provide direct access to the NFS file manipulation primitives. Typically, the NFS client's high layer application programming interface, which provides the file system semantics of the client's operating system, is implemented using the low layer interface. For example, if the client operating system is Unix, then the Unix I/O application programming interface would also provide access to NFS file systems on servers.
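The two-layer structure can be illustrated with a sketch. This is an invented model, not the actual NFS wire protocol: the low layer exposes stateless lookup/read primitives, and a Unix-style `open`/`read` high layer is built on top of it purely on the client side. Note that `open` creates no state on the server; the file handle and the current offset live entirely in the client.

```python
class NfsServer:
    """Low layer: stateless file manipulation primitives."""
    def __init__(self, files):
        self.files = files  # name -> bytes

    def lookup(self, name):
        # Returns an opaque file handle (here, just the name) or None.
        return name if name in self.files else None

    def read(self, handle, offset, count):
        # Every read carries its full context; no open/close needed.
        return self.files[handle][offset:offset + count]

class UnixClientFile:
    """High layer: Unix read() semantics kept entirely on the client."""
    def __init__(self, server, handle):
        self.server, self.handle, self.offset = server, handle, 0

    def read(self, count):
        data = self.server.read(self.handle, self.offset, count)
        self.offset += len(data)  # the file position is client-side state only
        return data

def unix_open(server, name):
    handle = server.lookup(name)  # no state is created on the server
    if handle is None:
        raise FileNotFoundError(name)
    return UnixClientFile(server, handle)

srv = NfsServer({"motd": b"hello nfs"})
f = unix_open(srv, "motd")
print(f.read(5), f.read(4))
```

A client operating system with different file semantics would implement its own high layer over the same low-layer primitives, which is what lets dissimilar file systems participate in one NFS installation.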

Data Robotics Drobo


The classic image of a robot as a lurching metallic automaton has been challenged in recent years by products like iRobot's Roomba home-roaming vacuum, which looks more like an oversized dinner plate than C-3PO. Now comes the Drobo, a device that looks like an ordinary external hard drive or network attached storage (NAS) unit but which Data Robotics dubs "the world's first storage robot."
That may sound like marketing puffery, but based on Dictionary.com's definition of a robot -- a mechanical device that operates automatically with human-like skill -- it may not be far off the mark. It turns out that the $499 Drobo provides storage that's considerably easier to configure and upgrade than any device we've used before.
Its compact size and rectangular shape make the Drobo look like a toaster. Slices of bread -- in the form of standard 3.5-inch internal Serial ATA I or II hard drives, of any capacity, from any manufacturer -- go into four bays that you access by pulling off the Drobo's magnetically attached front cover. Unlike most storage devices with user-accessible bays, Drobo does not require special mounting trays, connectors, or cables to connect the drives -- you simply slide them into each slot until they click into place. Buying the drives is up to you; Drobo's $499 price includes only the chassis.
A large fan channels hot air from the rear of the Drobo. We wouldn't necessarily call it loud, but the fan noise periodically became noticeable over the din of the nearby PC. You connect the Drobo directly to a PC or Mac (using NTFS or HFS+ formatting respectively) via USB 2.0 only. We'd like the device even more if it let us connect through FireWire, eSATA, or even Ethernet; Data Robotics merely says that models with additional interfaces might appear in the future.
You can network the Drobo by connecting it to a USB storage server or NAS device, but since it doesn't support the FAT32 format, this scenario doesn't support full access in a mixed PC/Mac environment. (A Mac can read but not write NTFS, and Windows doesn't understand Apple's HFS+ formatting at all.)
Just Put That Anywhere
When you think of multiple-hard-drive storage, you think of RAID. But while the Drobo does use RAID-style techniques to organize and protect data, it doesn't do it in the customary way. Most notably, RAID requires drives of identical capacity, but the Drobo has no such limitation. Instead, it uses a storage scheme that aggregates different-sized drives into a single volume.
While standard RAID mirrors one disk on another (RAID 1) or stripes parity data across three or more disks (RAID 5), the method the Drobo uses to safeguard your data depends on how many drives you use, what their capacities are, and how much data is on them. The Drobo protects data files rather than disks, so depending on the aforementioned variables it can either mirror or parity-stripe your data. In many cases, it will do both -- mirroring one set of data and striping across disks.
Drobo's approach to data protection reserves a significant chunk of your overall capacity in order to let the unit reorganize your data as needed. This means that there's always free space where Drobo can relocate data in the event of a drive failure. Another huge upside is that this lets you increase the system's capacity whenever you need to, simply by replacing any drive with a larger one. The Drobo can incorporate the added capacity into its storage scheme without any data disruption and without requiring you to manually rebuild volumes.
Determining how much capacity is left on a standard RAID device after subtracting space used for data protection is a relatively simple calculation -- you lose half of the total with RAID 1 or 10, or 1/x of the total (where x is the number of drives) with RAID 5. Figuring out the usable space on the Drobo isn't quite as straightforward, especially with varying drive capacities. A general rule of thumb is to subtract the capacity of the largest drive used; Data Robotics' Web site offers a graphical "Drobolator" utility that lets you calculate usable capacity more precisely.
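The rule of thumb above is easy to express as a quick calculation. This is only the rough estimate described in the text -- subtract the largest drive -- not the exact figure the Drobolator produces; the function names are ours.

```python
def drobo_usable_estimate(drive_gb):
    """Rule-of-thumb usable capacity: total minus the largest drive."""
    if len(drive_gb) < 2:
        return 0  # redundancy needs at least two drives
    return sum(drive_gb) - max(drive_gb)

def raid5_usable(drive_gb):
    """Standard RAID 5 for comparison: drives must be identical in size,
    and you lose 1/x of the total (one drive's worth of parity)."""
    assert len(set(drive_gb)) == 1, "RAID 5 requires identical drives"
    return sum(drive_gb) - drive_gb[0]

# Mixed drive sizes, which standard RAID would not accept:
print(drobo_usable_estimate([500, 500, 320, 250]))  # 1070 (GB)
print(raid5_usable([500, 500, 500, 500]))           # 1500 (GB)
```

The comparison also shows the trade-off: a four-drive Drobo mixing 250 GB through 500 GB drives yields roughly 1,070 GB usable, while matching RAID 5's 1,500 GB would require four identical 500 GB drives.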

Quantum Networks

Quantum Key Distribution (QKD) is an emerging technology that uses the fundamental laws of quantum physics in order to ensure secure communication. Quantum networks enable secure distribution of quantum crypto keys among multiple users in a commercial network infrastructure.

What are the issues we are trying to address?

Our nation's business and defense require secure transmission of information over communication links. There is a continuing need to develop advanced technologies to safeguard data transmission and communications. QKD has been shown to be an effective cryptographic key distribution mechanism that remains secure even once quantum computing becomes a reality, but it is still a developing technology. To make QKD practical, we need to improve its performance in terms of key generation speed, transmission distance, and error rates. We also need to develop measurement methodologies and metrics for the new systems, as well as new protocols and standards for the new QKD infrastructure.
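One reason key generation speed is a central metric can be seen in the basis-sifting step of BB84 (the protocol named in the keywords below). The following is a toy classical simulation of sifting only, under the assumption of an ideal channel and no eavesdropper: Alice and Bob keep just the bits where their independently chosen measurement bases happen to match, which discards roughly half of the raw bits before error correction even begins.

```python
import random

def bb84_sift(n, rng):
    """Simulate BB84 basis sifting: keep bits where Alice's and Bob's
    randomly chosen bases agree (about half, on average)."""
    alice_bits  = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.choice("+x") for _ in range(n)]  # '+' rectilinear, 'x' diagonal
    bob_bases   = [rng.choice("+x") for _ in range(n)]
    # Ideal channel: Bob recovers Alice's bit whenever the bases match;
    # mismatched-basis results are announced and discarded.
    return [bit for bit, a, b in zip(alice_bits, alice_bases, bob_bases) if a == b]

rng = random.Random(0)
sifted_key = bb84_sift(1000, rng)
print(len(sifted_key))  # roughly 500 of the 1000 raw bits survive sifting
```

Real systems then sacrifice further bits to error estimation and privacy amplification, which is why raw photon-detection rates must be much higher than the usable key rate.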

What are we doing to address these issues?

We have built an open system for research, testing, calibrations, and technology development in a real-world telecommunications environment. We are developing a testbed and measurement infrastructure for testing new photon sources and detectors, and new methods for transmitting quantum keys over standard telecom infrastructures.

Accomplishments and future outlook for Quantum Networks?

In 2000 the NIST Information Technology Laboratory (ITL), in collaboration with the NIST Physics Laboratory and with the support of DARPA, initiated a project to build the infrastructure for a high-speed Quantum Key Distribution (QKD) system using a free space link. By 2004 we demonstrated free space QKD over 730 meters at a key rate of 1 Mbit/s.
In 2005 ITL began to research QKD in fiber, and by 2006 we had developed a fiber-based QKD system with key rates of 4.14 Mbit/s over 1 km of fiber while maintaining a quantum bit error rate (QBER) as low as 3.4%. Also by 2006, we demonstrated QKD transmission using telecom wavelengths for optimal distances, built a novel frequency up-conversion module with very low noise for optimal transmission and detection of photons, and demonstrated a three-user QKD network (one Alice and two Bobs) suitable for QKD local-area networks (LANs). We will be extending the range to cover metropolitan-area networks (MANs).
For more information concerning this program, please contact project leader Dr. Xiao Tang (xiao.tang@nist.gov).
Keywords: quantum communication, quantum cryptography and key distribution (QKD), BB84, free space optics, photon sources/detectors.