Network File System


The Network File System (NFS), initially developed by Sun Microsystems, is a communication protocol and application programming interface which is emerging as a de facto standard for distributed file system services in a heterogeneous computing environment. It permits a partition of a server's file system to be associated with either a device or a subdirectory on a client, depending on the file/device model of the client's file system. Although NFS was first implemented within a Unix environment, it is now implemented within several different operating system environments. File manipulation primitives supported by NFS include: read, write, create a file or directory, remove a file or directory, and look up a file name. NFS includes an Application Layer protocol and is usually part of a Transmission Control Protocol/Internet Protocol (TCP/IP) protocol stack.
NFS is referred to as a stateless system. This means that the server does not maintain the state of files, from the client's point of view, in file systems mounted by clients. There are no open or close primitives in NFS. Each file manipulation request from the client contains all of the information necessary for the server to complete the request. The server responds fully to every client's request without being aware of the conditions under which the client is making the request. Thus, for example, if the server fails, the client may just continue to request file access until the server is able to respond. Only the client knows the state of a file for which service is requested. In a system where a server maintains the states of files as they are accessed by each client, the failure of a client, a server, or the network is difficult to recover from in an acceptable manner that will restore the states of clients and servers to the conditions in place before the failure.
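To make the idea of a self-contained request concrete, here is a minimal sketch. The class and field names are invented for illustration and do not reflect the actual NFS wire format (which is encoded with XDR over RPC); the point is that every read carries the file handle, offset and count, so the server can answer without remembering anything between calls.

```python
from dataclasses import dataclass

# Hypothetical illustration of a stateless, NFS-style read request.
# Real NFS encodes these fields with XDR over RPC; the names here are invented.

@dataclass
class ReadRequest:
    file_handle: bytes   # opaque handle identifying the file on the server
    offset: int          # where to start reading
    count: int           # how many bytes to read

class StatelessServer:
    """The server keeps no per-client state: every request is complete in itself."""
    def __init__(self, files):
        self.files = files  # maps file handles to file contents

    def read(self, req: ReadRequest) -> bytes:
        data = self.files[req.file_handle]
        return data[req.offset:req.offset + req.count]

server = StatelessServer({b"fh-42": b"hello, distributed world"})
# The client supplies handle, offset and count on every call; there is no open
# or close, so a crashed-and-restarted server can answer the next request immediately.
print(server.read(ReadRequest(b"fh-42", offset=7, count=11)))  # b'distributed'
```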
However, the absence of knowledge on the part of the server concerning what clients are doing to files can lead to unpleasant consequences. For example, one client may have a file open on a server and another client may delete the open file. The server is unaware that a client has the file open. In particular, a fully stateless mechanism cannot be used in database applications. Record and file locking inherently involves managing the current state of a file by a server.
In order to support database applications, the record locking mechanism specified in the System V Interface Definition (SVID) is provided by another protocol, the Network Lock Manager, which works in conjunction with NFS. The Network Lock Manager uses status monitors, daemon processes on both clients and servers, to initiate recovery procedures in the event of a failure. By means of status monitors, clients and servers notify each other of their operational state. If a client fails, then when the client is restarted, the server removes all lock information for that client and the client resubmits all of its lock requests. If a server fails, then when the server is restarted, it notifies all clients and the clients resubmit their lock requests.
Most NFS implementations use Remote Procedure Call. Such implementations usually support the user authentication methods of Remote Procedure Call discussed in section.

Another example of a communication protocol and application programming interface for distributed file systems is Remote File System (RFS), developed by AT&T. However, RFS only supports a distributed file system among nodes which have Unix-compatible file systems.
RFS is an example of an approach to distributed file systems which is termed stateful, i.e., the server maintains information about the state of the file on the client, such as, whether the file is open. This is necessary in order for RFS to support the full Unix file system semantics. The file systems of many operating systems do not support the semantics of a Unix file system. Participation in a distributed file system implemented using NFS does not require the semantics of a Unix file system. Thus, file systems from many different producers are able to be part of a distributed file system implementation using NFS.
Since RFS supports the full semantics of a Unix file system, the application programming interface for RFS is the Unix input/output (I/O) application programming interface. Since NFS is intended to support operating systems other than Unix, NFS can be described as having two layers of application programming interface. The high layer is the I/O application programming interface of the client operating system. The low layer is the NFS Remote Procedure Calls, which provide direct access to the NFS file manipulation primitives. Typically, the NFS client's high layer application programming interface, which provides the file system semantics of the client's operating system, is implemented using the low layer interface. For example, if the client operating system is Unix, then the Unix I/O application programming interface would also provide access to NFS file systems on servers.

Data Robotics Drobo


The classic image of a robot as a lurching metallic automaton has been challenged in recent years by products like iRobot's Roomba home-roaming vacuum, which looks more like an oversized dinner plate than C-3PO. Now comes the Drobo, a device that looks like an ordinary external hard drive or network attached storage (NAS) unit but which Data Robotics dubs "the world's first storage robot."
That may sound like marketing puffery, but based on Dictionary.com's definition of a robot -- a mechanical device that operates automatically with human-like skill -- it may not be far off the mark. It turns out that the $499 Drobo provides storage that's considerably easier to configure and upgrade than any device we've used before.
Its compact size and rectangular shape make the Drobo look like a toaster. Slices of bread -- in the form of standard 3.5-inch internal Serial ATA I or II hard drives, of any capacity, from any manufacturer -- go into four bays that you access by pulling off the Drobo's magnetically attached front cover. Unlike most storage devices with user-accessible bays, Drobo does not require special mounting trays, connectors, or cables to connect the drives -- you simply slide them into each slot until they click into place. Buying the drives is up to you; Drobo's $499 price includes only the chassis.
A large fan channels hot air from the rear of the Drobo. We wouldn't necessarily call it loud, but the fan noise periodically became noticeable over the din of the nearby PC. You connect the Drobo directly to a PC or Mac (using NTFS or HFS+ formatting respectively) via USB 2.0 only. We'd like the device even more if it let us connect through FireWire, eSATA, or even Ethernet; Data Robotics merely says that models with additional interfaces might appear in the future.
You can network the Drobo by connecting it to a USB storage server or NAS device, but since it doesn't support the FAT32 format, this scenario doesn't support full access in a mixed PC/Mac environment. (A Mac can read but not write NTFS, and Windows doesn't understand Apple's HFS+ formatting at all.)
Just Put That Anywhere
When you think of multiple-hard-drive storage, you think of RAID. But while the Drobo does use RAID-style techniques to organize and protect data, it doesn't do it in the customary way. Most notably, RAID requires drives of identical capacity, but the Drobo has no such limitation. Instead, it uses a storage scheme that aggregates different-sized drives into a single volume.
While standard RAID mirrors one disk on another (RAID 1) or stripes parity data across three or more disks (RAID 5), the method the Drobo uses to safeguard your data depends on how many drives you use, what their capacities are, and how much data is on them. The Drobo protects data files rather than disks, so depending on the aforementioned variables it can either mirror or parity-stripe your data. In many cases, it will do both -- mirroring one set of data and striping across disks.
Drobo's approach to data protection reserves a significant chunk of your overall capacity in order to let the unit reorganize your data as needed. This means that there's always free space where Drobo can relocate data in the event of a drive failure. Another huge upside is that this lets you increase the system's capacity whenever you need to, simply by replacing any drive with a larger one. The Drobo can incorporate the added capacity into its storage scheme without disrupting your data or requiring you to manually rebuild volumes.
Determining how much capacity is left on a standard RAID device after subtracting the space used for data protection is a relatively simple calculation -- you lose half of the total with RAID 1 or 10, or 1/x of the total -- where x is the number of drives -- with RAID 5. Figuring out the usable space on the Drobo isn't quite as straightforward, especially with varying drive capacities. A general rule of thumb is to subtract the capacity of the largest drive used; Data Robotics' Web site offers a graphical "Drobolator" utility that lets you calculate usable capacity more precisely.
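As a rough illustration of these rules of thumb (this is only the approximation described above, not the Drobolator's actual algorithm, and it ignores formatting and filesystem overhead):

```python
# Rough capacity estimates based on the rules of thumb above -- not the
# Drobolator's actual algorithm; real-world overhead will differ.

def raid1_usable(drives):
    """RAID 1/10: half of the total capacity goes to mirroring."""
    return sum(drives) / 2

def raid5_usable(drives):
    """RAID 5: identical drives, one drive's worth of capacity lost to parity (1/x)."""
    return sum(drives) * (len(drives) - 1) / len(drives)

def drobo_usable_estimate(drives):
    """Drobo rule of thumb: subtract the capacity of the largest drive."""
    return sum(drives) - max(drives)

mixed = [750, 500, 500, 250]          # four drives of different sizes, in GB
print(drobo_usable_estimate(mixed))   # 1250 GB usable, approximately

identical = [500, 500, 500, 500]
print(raid5_usable(identical))        # 1500.0 GB usable
```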

Quantum Networks

Quantum Key Distribution (QKD) is an emerging technology that uses the fundamental laws of quantum physics in order to ensure secure communication. Quantum networks enable secure distribution of quantum crypto keys among multiple users in a commercial network infrastructure.

What are the issues we are trying to address?

Our nation's business and defense require secure transmission of information over communication links. There is a continuing need to develop advanced technologies to safeguard data transmission and communications. QKD has been shown to be an effective mechanism for distributing cryptographic keys that remains secure even once quantum computing becomes a reality, but it is still a developing technology. In order to make QKD practical, we need to improve its performance in terms of key generation speed and transmission distance, and lower its error rates. We also need to develop measurement methodologies and metrics for the new system, and new protocols and standards for the new QKD infrastructure.

What are we doing to address these issues?

We have built an open system for research, testing, calibrations, and technology development in a real-world telecommunications environment. We are developing a testbed and measurement infrastructure for testing new photon sources and detectors, and new methods for transmitting quantum keys over standard telecom infrastructures.

Accomplishments and future outlook for Quantum Networks?

In 2000 the NIST Information Technology Laboratory (ITL), in collaboration with the NIST Physics Laboratory and with the support of DARPA, initiated a project to build the infrastructure for a high-speed Quantum Key Distribution (QKD) system using a free space link. By 2004 we demonstrated free space QKD over 730 meters at a key rate of 1 Mbit/s.
In 2005 ITL began to research QKD in fiber, and by 2006 we had developed a fiber channel QKD system with 4.14 Mbits/s key rates at over 1 km of fiber while maintaining a quantum bit error rate (QBER) as low as 3.4%. Also, by 2006 we demonstrated QKD transmission using telecom wavelengths for optimal distances, built a novel frequency up-conversion module with very low noise for optimal transmission and detection of photons, and demonstrated a three-user QKD network (one Alice and two Bobs), suitable for QKD local-area-networks (LANs). We will be extending the range to cover Metropolitan-area-networks (MANs).
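For readers unfamiliar with where a QBER figure comes from, the toy simulation below sketches BB84 basis sifting and error-rate estimation. It is a highly simplified model with an arbitrary 2% noise parameter, not a description of NIST's actual system.

```python
import random

# Toy BB84 sketch: Alice encodes random bits in random bases, Bob measures in
# random bases, and the two keep only the positions where their bases happened
# to match (sifting). The 2% flip probability stands in for channel noise and
# detector error; it is an arbitrary illustrative value, not a measured one.

def bb84_sift(n_pulses=100_000, noise=0.02, seed=1):
    rng = random.Random(seed)
    sifted_alice, sifted_bob = [], []
    for _ in range(n_pulses):
        bit = rng.randint(0, 1)
        alice_basis = rng.randint(0, 1)          # 0 = rectilinear, 1 = diagonal
        bob_basis = rng.randint(0, 1)
        if alice_basis != bob_basis:
            continue                             # discarded during sifting
        received = bit ^ (rng.random() < noise)  # occasional transmission error
        sifted_alice.append(bit)
        sifted_bob.append(received)
    errors = sum(a != b for a, b in zip(sifted_alice, sifted_bob))
    return len(sifted_alice), errors / len(sifted_alice)

sifted_len, qber = bb84_sift()
# Roughly half the pulses survive sifting; the QBER estimate tracks the noise rate.
print(f"sifted key: {sifted_len} bits, QBER ~ {qber:.1%}")
```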
For more information concerning this program, please contact project leader Dr. Xiao Tang (xiao.tang@nist.gov).
Keywords: quantum communication, quantum key distribution (QKD), BB84, free space optics, photon sources/detectors.

Cisco to Build Social Networks for Others


Cisco wants a piece of the social networking pie. It’s announced its plans for creating social networking websites for other companies at the Web 2.0 Conference this week, in what looks to be a white label offering.
With its acquisition of Five Across Inc., a company that offers website creation tools, and the technology from social network Tribe, it’s clear that Cisco is looking to social networking as a substantial stream of revenue. And not for its own social network, but for those businesses that would like to create social networking options to present to end users. The NHL is already a client of Five Across Inc., and has a social network for users to meet and share online, which reportedly gains more traffic than its main website.
Sponsorship of MTV’s Digital Incubator and investment in TokBox and BlackArrow also indicate a heavy interest in the development of online networks and applications. No details have been given regarding Cisco’s plans for the software, but with all the other networks opening their platform, perhaps Cisco could be in a position to offer an aggregated solution for distributing applications across its white label networks. Someone will do it soon.
At any rate, Cisco’s announcement follows those made at the conference by MySpace and Microsoft earlier this week, with plans for an open platform and the acquisition of 20 companies per year, respectively.

Scientist who first synthesised DNA


Arthur Kornberg, the Stanford University Nobel laureate who first synthesised DNA in a test tube and whose identification of the enzymes used by cells to manufacture DNA laid the basis for the biotechnology industry, died of respiratory failure on Friday at Stanford Hospital. He was 89.
A prolific researcher, Kornberg also created the Stanford University School of Medicine's biochemistry department, bringing in a talented group of scientists who worked together for nearly half a century.

Kornberg lived to see his son Roger win the 2006 Nobel Prize in Chemistry.
It is often hard to conceive how little was known about the mysterious DNA molecule when Kornberg began his research in the 1950s. Scientists were pretty sure that it was the repository of genetic information. Beyond that, DNA was a black hole.
In the second world war Kornberg developed an interest in enzymes, the large proteins used by cells to carry out chemical reactions, especially the synthesis of substances used by cells.
After preliminary work isolating enzymes involved in vitamin production, Kornberg tackled the more difficult challenge of DNA and RNA, the messenger molecule used by cells in the conversion of genetic information contained in DNA into proteins.
Kornberg reasoned that cells would produce DNA by stringing together pre-made nucleotides – combinations of a base, a sugar molecule and a phosphate group.
While Kornberg was working on the project in 1953, James Watson and Francis Crick published the structure of DNA, providing clues to direct his efforts. By the following year, Kornberg and his colleagues had isolated the enzymes used to produce the nucleotides used in RNA and DNA.
By 1957, Kornberg had discovered and purified the key molecule, called DNA polymerase, and submitted two papers describing the work to the Journal of Biological Chemistry. Referees, however, objected to calling the material produced by the enzyme DNA.
Disgusted, Kornberg withdrew the papers, but they were published the following year when the journal appointed a new editor.
His work confirmed speculation by Watson and Crick that genetic information was encoded in opposite directions on the two strands of double-helical DNA.
In 1959, Kornberg shared the Nobel Prize in Physiology or Medicine for the synthesis of DNA.

SembCorp Marine hit by forex

Shares in Singapore’s SembCorp Marine fell more than 15 per cent on Tuesday after the world’s number two offshore oil rig maker said it could lose up to $248m from alleged unauthorised forex trading by its finance director.
SembCorp Marine said Wee Sing Guan, the group finance director, had been relieved from duty. Mr Wee has also resigned from the directorships of SembMarine units, including Jurong Shipyard, on whose account the company claims the currency trades were made.

The potential losses for SembCorp Marine include $83m that Jurong Shipyard has already paid out on forex losses and up to $165m in estimated unrealised losses. The total would exceed SembMarine’s profits last year of S$234m ($159.7m).
SembCorp Marine on Tuesday said it had sold part of its stake in Cosco Corp (Singapore), which has several ship and offshore oil rig building yards in China, for a gain of S$230m, which would mitigate the impact of the forex losses on this year’s results.
The company reported a 62 per cent rise in earnings to S$158m in the first half on the back of a boom in global demand for offshore oil rigs.
SembMarine said it did not allow speculative forex trading, although it does hedge against currency risks. The US dollar has suffered an unexpectedly sharp 7 per cent fall against the Singapore dollar in the past year, which may have accounted for the forex losses.
The company added that steps had been taken “to prevent the entry of any further unauthorised transactions”.
Temasek, Singapore’s state-owned investment agency, owns 49 per cent of SembCorp Industries, which in turn has 61.5 per cent of SembCorp Marine. Shares in SembCorp Marine fell 15.4 per cent to S$4.74, while SembCorp Industries dropped nearly 5 per cent to S$6. Cosco closed down 2.7 per cent at S$7.30.
The disclosure of the alleged forex trades took investors by surprise since Temasek-linked companies are known for tight financial controls.
Drew & Napier, a local law firm, and Ernst & Young, the accounting group, are set to conduct a probe into the transactions on SembCorp Marine’s behalf.
Three local brokerages cut their ratings on SembCorp Marine following the disclosure of the forex losses, even though the company’s fundamentals are seen as strong due to high oil prices, which have led to increased orders for offshore oil rigs.

Retail investors keep faith with forex

Japanese retail investors are pouring into the yen carry trade despite being battered by the recent sharp rise in the country's currency against the US dollar.
For many individual traders, the yen's surge against the dollar has been a rude awakening to the dangers of forex trading. Stories abound of housewives losing their lifetime savings within days as the yen value of highly leveraged dollar holdings plummeted.
Contrary to expectations, the currency market turmoil has not significantly dented the appetite of Mr and Mrs Watanabe to play the foreign exchange markets.
According to research by Yano Research Institute, the number of foreign exchange margin trading accounts almost doubled last year to 644,802 and is poised to grow another 62 per cent in the year to March, to 1.05m accounts.
Although the survey was published in June, long before the latest yen upswing, Kazuhiro Shirakura, senior researcher at Yano, said: "The number of accounts has not been affected at all [by the yen's recent rise] and has continued to increase." This was because new forex traders usually began by buying foreign currencies and the strong yen made it a good environment to start trading, he said.
Even traders who had been hurt by the yen's rise had generally taken a long-term view and stayed in the market, said Junichi Katsuno in the financial markets division at Himawari Shoken, which specialises in foreign exchange trading.
"There are many people who take a long-term view and are waiting to buy [dollars]," he said. "There aren't that many people who expect the yen to strengthen that much more."
Retail investors were generally willing to participate in the yen carry trade as long as the yen did not rise above Y105 to the US dollar, Mr Katsuno said.
Individual traders are also becoming more sophisticated in their trading strategy.
Since the summer, instead of taking large positions in one currency, which can lead to huge losses, investors are taking smaller positions and trading frequently and flexibly to minimise any potential losses.
What is more, not all retail traders have lost out in the latest currency turmoil.
One housewife who uses the online handle "Forex-loving Mama" to blog about her trading, said she had made profits by taking both short and long positions in the more than 20 currency pairs she tracked daily.
The woman, who declined to be identified, said she closed her positions each day before going to bed and was not fazed by the latest currency moves that had left many retail investors nursing huge losses.
She also expects other retail traders to stay in the market. "Surprisingly, there aren't many people who say they will quit. I think it's because people remember how good it can be. It's like a gambling addiction," she said.

VC returns: reversion to the mean?

The pain from the long dotcom hangover is finally starting to recede into the past, at least when it comes to venture capital returns. About time, too. But it is still far too soon to tell whether historic long-term profits from VC investment will hold up.
The story is told in the chart below. The thick broken line at the bottom shows five-year venture capital returns in the US. As write-offs from the dotcom disaster have receded and profitable exits are being found for the companies that survived, this line has finally crept back into positive territory (the latest figures were put out today by the NVCA).


The most encouraging part of this chart is the thin broken line in the middle: despite the boom and bust, 20-year returns from start-up financing have stayed remarkably solid, at around 16 per cent a year.
But will that continue to hold good for the next 20 years? The top line shows how ten-year returns, which had been boosted by the bubble, are sinking back towards the norm. The supply and demand equation in venture financing looks very different than it did back in the mid-1990s, with many more funds and many more investors still scrambling to get in. That surely points, eventually, to long-term returns below the historic mean.

Phoenix from the flames


My usual routine for booting my home computer is to turn it on, then go away, make a cup of tea and come back 10 to 15 minutes later, knowing it’s safe by then to click something and get some kind of response.
The only fast part of the whole process is the initial boot of the BIOS, the piece of software that sits on a memory chip and checks everything is powered on, then tells the PC where to find the operating system.
The name that briefly flashes up on my screen and on millions of others is Phoenix Technologies, the leading maker of this system firmware.
Phoenix had been in decline until a new management team came in a year ago with a plan to revitalise its product line. Its biggest step so far is the HyperSpace platform announced today, in what amounts to a reinvention of the humble role of the BIOS.
HyperSpace enhances the BIOS by adding a mini operating system of its own that should provide almost instant access to web browsers, email programs, instant messaging clients and media players that are written for it.
Phoenix is using new virtualisation capabilities being built into Intel and AMD chips, which allow such environments to be compartmentalised for greater security.
Applications have yet to be developed for the platform and Phoenix could face competition from other software companies, including Microsoft, by the time of HyperSpace’s full launch on notebook PCs in the second half of 2008.
But Rich Arnold, chief strategy officer, expects Phoenix will find enough partners to exploit HyperSpace’s instant-on capabilities for people wanting fast access to standard applications while on the go.
“We want to be a Blackberry inside the PC that has all the same functionality,” he says.
“This will give access to just the applications you really need and probably give you an hour of extra battery life.”

Sony's $100m HD campaign


Despite fires in San Diego closing its factory there and forcing evacuations of its workers, Sony Electronics says it has now caught up with orders and is expecting a bumper holiday season in the US.
“It could be the best holiday season in the last couple of years,” Stan Glasgow, its president in the US, told us at a press event in San Francisco on Monday night.
His confidence is based on orders from retailers for Sony’s current third quarter and he also expects returns from a $100m marketing campaign in the US.
“You will see more advertising than we have ever done before,” he said. The emphasis will be on High Definition, with Sony’s HDNA campaign prominent.
Sony also showed off the new version of its Sony Reader and an 8GB Walkman that competes with the iPod Nano. Significantly for Sony, its proprietary Atrac format is gone and it now seems more open than Apple on formats, with WMA, MP3 and AAC music files supported.
Mr Glasgow said Sony’s announcements at the Consumer Electronics Show in Las Vegas in January would emphasise integration between Sony’s different product lines – a key goal of Sony’s chief executive, Sir Howard Stringer.
His US president was particularly critical of supporters of the HD-DVD standard, which competes with Sony’s Blu-ray DVD players. He said Toshiba and perhaps Wal-Mart had substantially subsidised an HD-DVD player that went on sale for $99 at the retailer last week.
He also claimed that HD-DVD supporters had paid the Hollywood studios as much as $500m in incentives to bring out movies in that format. The studios are also reported to have asked for incentives from Sony and the New York Times reported in August that Paramount and Dreamworks alone had received $150m for choosing HD-DVD over Blu-ray for their releases.

The evolution of PC Virus 7

How to choose nonsignature protection
Currently, most security solutions combine several different technologies. Classic antivirus programs often use signature detection in combination with some form of system event monitoring, an emulator and a sandbox. So what should you look for in order to find protection that best suits your specific needs?
First of all, keep in mind that there is no such thing as a universal or ‘best’ solution. Each technology has advantages and drawbacks. For example, monitoring system events constantly takes up a lot of processor time, but this method is the toughest to trick. Malware can circumvent the emulation process by using certain commands in its code, but if those commands are used, the malicious code will be detected preemptively, i.e. the system remains untouched. Another example: simple decision-making rules require too much input from the user, who will be required to answer a multitude of questions, whereas more complex decision-making rules, which do not require so much user input, give rise to multiple false positives.
Selecting technologies means choosing the golden mean; that is, picking a solution by taking specific demands and conditions into account. For example, those who work in vulnerable conditions (with an unpatched system, no restrictions on using browser add-ons, scripts, etc.) will be very concerned about security and will have sufficient resources to implement appropriate security measures. A sandbox-type system with a quality analytical component will best suit this kind of user. This type of system offers maximum security, but given current conditions, it will eat up a lot of RAM and processor time, which could slow the operating system beyond acceptable levels. On the other hand, an expert who wants to control all critical system events and protect him/herself from unknown malicious programs will do well with a real-time system monitor. This kind of system works steadily and does not overload the operating system, but it requires user input to create rules and exceptions. Finally, a user who either has limited resources or does not want to overload his system with constant monitoring, and who does want the option to create rules, will be best served by simple heuristics. Ultimately, it’s not a single component that ensures quality detection of unknown malicious programs, but the security solution as a whole. A sophisticated decision-making method can compensate for simpler technologies.
Nonsignature systems used to detect previously unknown malicious code fall into two categories. The first category includes stand-alone HIPS systems, such as Prevx and Cyberhawk. The second group includes leading antivirus products, which in their continued evolution towards greater effectiveness have come to use nonsignature technologies. The advantages of one over the other are obvious: the first category offers a highly dedicated solution which has unlimited potential for improvement in terms of quality; the second makes use of the wealth of experience stemming from the multi-faceted battle against malicious programs.
In choosing a new product, the best recommendation is to trust personal impressions and independent test results.

The evolution of PC Virus 6

Different malware detection methods: the pros and cons
If we examine technologies that protect against malware as a group rather than individually, and using the model introduced in this article, the following picture emerges.
The technical component of a technology is responsible for features such as how resource-hungry a program is (and as a result, how quickly it works), security and protection.
A program’s resource requirements are the share of processor time and RAM required either continually or periodically to ensure protection. If software requires a lot of resources, it may slow down system performance. Emulators run slowly: regardless of implementation, each emulated instruction will create several instructions in the artificial environment. The same goes for virtualization. System event monitors also slow systems down, but the extent to which they do so depends on the implementation. As far as file detection and system anomaly detection are concerned, the load on the system is also entirely dependent on implementation.
Security is the level of risk which the operating system and user data will be subjected to during the process of identifying malicious code. This risk is always present when malicious code is run in an operating system. The architecture of system event monitors means that malicious code has to be run before it can be detected, whereas emulators and file scanners may detect malicious code before it is executed.
Protection reflects the extent to which a technology may be vulnerable, or how easy it may be for a malicious program to hinder detection. It is very easy to combat file detection: it's enough to pack a file, make it polymorphic, or use rootkit technology to disguise a file. It's a little tougher to circumvent emulation, but it is still possible; a virus writer simply has to build any of a range of tricks into the malicious program’s code. On the other hand, it's very difficult for malware to hide itself from a system event monitor, because it's nearly impossible to mask a behaviour.
In conclusion, the less abstract the form of protection, the more secure it will be. The caveat: the less abstract the form of protection, the easier it will be for malware to circumvent.
The analytical aspect of a technology is responsible for features such as proactivity (and the consequent impact on the necessity for frequent antivirus database updates), the false positive rate and the level of user involvement.
Proactivity refers to a technology’s ability to detect new, as yet unidentified malicious programs. For example, the simplest type of analysis (simple comparison) denotes the least proactive technologies, such as signature detection: such technologies are only able to detect known malicious programs. The more complex an analytical system is, the more proactive it is. Proactivity is directly linked to how frequently updating needs to be conducted. For example, signature databases have to be updated frequently; more complex heuristic systems remain effective for longer, and expert analytical systems can function successfully for months without an update.
The false positive rate is also directly related to the complexity of a technology’s analytical component. If malicious code is detected using a precisely defined signature or sequence of actions, as long as the signature (be it byte, behavioral or other) is sufficiently long, identification will be absolute. The signature will only detect a specific piece of malware, and not other malicious programs. The more programs a detection algorithm attempts to identify, the less clear it becomes, and the probability of detecting non-malicious programs increases as a result.
The level of user involvement is the extent to which a user needs to participate in defining protection policies: creating rules, exceptions and black and white lists. It also reflects the extent to which the user participates in the process of issuing verdicts by confirming or rejecting the suspicions of the analytical system. The level of user involvement depends on the implementation, but as a general rule the further analysis is from a simple comparison, the more false positives there will be that require correction. And correcting false positives requires user input.
In conclusion, the more complex the analytical system, the more powerful the antivirus protection is. However, increased complexity means an increased number of false positives, which can be compensated for by greater user input.
The model described above theoretically makes it easier to evaluate the pros and cons of any technology. Let’s take the example of an emulator with a complex analytical component. This form of protection is very secure (as it does not require the file being scanned to be launched) but a certain percentage of malicious programs will go undetected, either due to anti-emulator tactics used by the malicious code or due to inevitable bugs in the emulator itself. However, this type of protection has great potential and if carefully implemented will detect a high percentage of unknown malicious programs, albeit slowly.

The evolution of PC Virus 5

Real technologies at work
Let’s now examine exactly which algorithms are used in which malware detection technologies.
Typically, manufacturers give new names to the new technologies they develop (Proactive Protection in Kaspersky Anti-Virus, TruPrevent from Panda, and DeepGuard from F-Secure). This is good as it means that individual technologies will not automatically be pigeon-holed in narrow technical categories. Nevertheless, using more general terms such as “heuristic,” “emulation,” “sandbox,” and "behaviour blocker” is unavoidable when attempting to describe technologies in an accessible, relatively non-technical way.
This is where the tangled web of terminology begins. These terms do not have clear-cut meanings (ideally, there would be one clear definition for each term). One person may interpret a term in a completely different way from someone else. Furthermore, the definitions used by the authors of so-called "accessible descriptions" are often very different from the meanings used by professionals. This explains the fact that descriptions of technologies on developer websites may be crammed with technical terminology while not actually describing how the technology works or giving any relevant information about it.
For example, some antivirus software manufacturers say their products are equipped with HIPS, proactive technology or nonsignature technology. A user may understand “HIPS” as being a monitor that analyzes system events for malicious code, and this may not be correct. This description could mean almost anything e.g. that an emulator engine is equipped with a heuristic analysis system (see below). This kind of situation arises even more often when a solution is described as heuristic without giving any other details.
This is not to say that developers are trying to deceive clients. It’s likely that whoever prepares the description of technologies has simply got the terms confused. This means that descriptions of technologies prepared for end users may not accurately describe how the technology works, and that clients should be cautious if using descriptions when selecting a security solution.
Now let’s take a look at the most common terms in antivirus technologies.
There are few variations in the meanings of signature detection: from a technical perspective, it means working with file byte code, and from an analytical point of view, it is a primitive means of processing data, usually by using simple comparison. This is the oldest technology, but it is also the most reliable. That’s why despite the considerable costs incurred in keeping databases up to date, this technology is still used today in all antivirus software.
There aren't many possible interpretations of the terms emulator or sandbox, either. In this type of technology the analytical component can be an algorithm of any complexity, ranging from simple comparison to expert systems.
The term heuristic is less transparent. According to Ozhegova-Shvedovaya, the definitive Russian dictionary, "heuristics is a combination of research methods capable of detecting what was previously unknown." Heuristics are first and foremost a type of analytical component in protection software, but not a clearly defined technology. Outside a specific context, in terms of problem-solving, it closely resembles an “unclear” method used to resolve an unclear task.
When antivirus technologies first began to emerge - which was when the term heuristic was first used - the term meant a distinct technology: one that would identify a virus using several flexibly assigned byte templates, i.e. a system with a technical component (e.g. working with files) and an analytical component (using complex comparison). Today the term heuristic is usually used in a wider sense to denote technology that is used to search for unknown malicious programs. In other words, when speaking about heuristic detection, developers are referring to a protection system with an analytical component that uses a fuzzy search to find a solution (this could correspond to an analytical component which uses either complex analysis or an expert system). The technological foundation of the protection software, i.e. the method it uses to gather data for subsequent analysis, can range from simply working with files up to working with events or the status of the operating system.
Behavioral detection and proactive detection are terms which are even less clearly defined. They can refer to a wide variety of technologies, ranging from heuristics to system event monitoring.
The term HIPS is frequently used in descriptions of antivirus technologies, but not always appropriately. Despite the fact that the acronym stands for Host Intrusion Prevention System, this does not reflect the essential nature of the technology in terms of antivirus protection. In this context, the technology is very clearly defined: HIPS is a type of protection which from a technical point of view is based on monitoring system events. The analytical component of the protection software may be of any type, ranging from coinciding separate suspicious events to complex analysis of a sequence of program actions. When used to describe an antivirus product, HIPS may be used to denote a variety of things: primitive protection for a small number of registry keys, a system that provides notification of attempts to access certain directories, a more complex system that analyzes program behaviour or even another type of technology that uses system event monitoring as its basis.

The evolution of PC Virus 4

The analytical component
The degree of sophistication of decision-making algorithms varies. Roughly speaking, decision-making algorithms can be divided into three different categories, although there are many variants that fall between these three categories.
Simple comparison
In this category, a verdict is issued based on the comparison of a single object with an available sample. The result of the comparison is binary (i.e. "yes" or "no"). One example is identifying malicious code using a strict byte sequence. Another higher level example is identifying a suspicious program behavior by a single action taken by that program (such as creating a record in a critical section of the system registry or the autorun folder).
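A minimal sketch of the single-action flavour of this approach might look like the following; the registry path and the rule are illustrative only.

```python
# Minimal sketch of simple comparison: a single observed action is matched
# against a fixed rule, and the verdict is a plain yes/no.
# The registry path below is illustrative, not an authoritative list.

AUTORUN_KEY = r"HKLM\Software\Microsoft\Windows\CurrentVersion\Run"

def is_suspicious(action: dict) -> bool:
    return action["type"] == "registry_write" and action["key"] == AUTORUN_KEY

event = {"type": "registry_write",
         "key": AUTORUN_KEY,
         "value": r"C:\Users\Public\updater.exe"}
print(is_suspicious(event))   # True: the single action matches the sample
```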
Complex comparison
In this case a verdict is issued based on the comparison of one or several objects with corresponding samples. The templates for these comparisons can be flexible and the results will be probability based. An example of this is identifying malicious code by using several byte signatures, each of which is non-rigid (i.e. individual bytes are not determined). Another higher level example is identifying malicious code by API functions which are called non-sequentially by the malicious code with certain parameters.
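A sketch of a flexible byte template might look like this (the signature bytes are invented): positions marked as wildcards are ignored, and the result is a match score rather than a strict yes/no.

```python
# Sketch of complex comparison: a byte template in which some positions are
# wildcards (None), producing a probability-like score instead of a strict match.
# The signature bytes are invented for illustration.

SIGNATURE = [0x4D, 0x5A, None, None, 0x90, 0x00, 0x03]   # None = any byte

def match_score(data: bytes, signature) -> float:
    fixed = [(i, b) for i, b in enumerate(signature) if b is not None]
    best = 0.0
    for start in range(len(data) - len(signature) + 1):
        hits = sum(data[start + i] == b for i, b in fixed)
        best = max(best, hits / len(fixed))
    return best   # 1.0 = every fixed byte matched at some offset

sample = bytes([0x4D, 0x5A, 0x77, 0x12, 0x90, 0x00, 0x03, 0xFF])
print(match_score(sample, SIGNATURE))   # 1.0 here; lower scores mean 'probably'
```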
Expert systems
In this category, a verdict is issued after a sophisticated analysis of data. An expert system may include elements of artificial intelligence. One example is identifying malicious code not by a strict set of parameters, but by the results of a multifaceted assessment of all of its parameters at once, taking into account the ‘potentially malicious’ weighting of each parameter and calculating the overall result.
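A toy sketch of that weighted assessment follows; the features, weights and threshold are all invented for illustration, and a real expert system would be far more elaborate.

```python
# Toy sketch of an expert-system-style verdict: every observed property carries
# a 'potentially malicious' weight, and the verdict comes from the overall score
# rather than any single parameter. Features, weights and threshold are invented.

WEIGHTS = {
    "packed_executable":   0.3,
    "writes_autorun_key":  0.4,
    "no_visible_window":   0.1,
    "opens_smtp_socket":   0.35,
}
THRESHOLD = 0.6

def verdict(observed_features) -> tuple[float, str]:
    score = round(sum(WEIGHTS.get(f, 0.0) for f in observed_features), 2)
    return score, ("malicious" if score >= THRESHOLD else "clean")

print(verdict({"packed_executable", "writes_autorun_key"}))   # (0.7, 'malicious')
print(verdict({"no_visible_window"}))                         # (0.1, 'clean')
```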

The evolution of PC Virus 3

The technical component
The technical component of a malware detection system collects data that will be used to analyze the situation.
On one hand, a malicious program is a file containing specific content. On the other hand, it is a collection of actions that take place within an operating system. It is also the sum total of final effects within an operating system. This is why program identification can take place at more than one level: by byte sequence, by action, by the program’s influence on an operating system, etc.
The following are all ways that can be used to collect data for identifying malicious programs:
treating a file as a mass of bytes
emulating the program code
launching the program in a sandbox (and using other similar virtualization technologies)
monitoring system events
scanning for system anomalies
These methods are listed in terms of increased abstraction when working with code. The level of abstraction here means the way in which the program being run is regarded: as an original digital object (a collection of bytes), as a behaviour (more abstract than the collection of bytes) or as a collection of effects within an operating system (more abstract than the behaviour). Antivirus technology has, more or less, evolved along these lines: working with files, working with events via a file, working with a file via events, and working with the environment itself. This is why the list above naturally illustrates chronology as well as methods.
It should be stressed that the methods listed above are not so much separate technologies as they are theoretical stages in the continuing evolution of technologies used to collect data which is used to detect malicious programs. Technologies gradually evolve and intersect with one another. For example, emulation may be closer to point (1) in the list if it is implemented in such a way that it only partially handles a file as a mass of bytes. Or it may be closer to point (3) if we are talking about full virtualization of system functions.
The methods are examined in more detail below.
Scanning files
The very first antivirus programs analyzed file code which was treated as byte sequences. Actually, "analyze" is probably not the best term to use, as this method was a simple comparison of byte sequences against known signatures. However, here we are interested in the technical aspect of this technology, namely getting data as part of the search for malicious programs. This data, extracted from files as a mass of bytes structured in a particular way, is transmitted to the decision-making component.
A typical feature of this method is that the antivirus works only with the source byte code of a program and does not take program behaviour into account. Despite the fact that this method is relatively old, it is not out of date, and is used in one way or another by all modern antivirus software - just not as the sole or even as the main method, but as a complement to other technologies.
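A bare-bones sketch of this technical component might look like the following; the signature database is a placeholder with invented byte patterns.

```python
# Sketch of the file-scanning technical component: the file is treated purely as
# a mass of bytes, which is handed to a trivial analytical step (substring search
# against known signatures). The names and byte patterns are invented placeholders.

KNOWN_SIGNATURES = {
    "Example.TestVirus.A": bytes.fromhex("deadbeef90909090"),
    "Example.TestVirus.B": bytes.fromhex("cafebabe00000042"),
}

def scan_file(path: str):
    with open(path, "rb") as f:
        content = f.read()            # the whole file as a byte sequence
    return [name for name, sig in KNOWN_SIGNATURES.items() if sig in content]

# Usage: write a harmless test file containing one of the byte patterns.
with open("sample.bin", "wb") as f:
    f.write(b"header" + bytes.fromhex("deadbeef90909090") + b"trailer")
print(scan_file("sample.bin"))        # ['Example.TestVirus.A']
```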
Emulation
Emulation technology is an intermediary stage between processing a program as a collection of bytes and processing a program as a particular sequence of actions.
An emulator breaks down a program's byte code into commands, and then launches each command in a virtual environment which is a copy of the computer environment. This allows security solutions to observe program behavior without any threat being posed to the operating system or user data (which would inevitably happen if the program were run in the real, i.e. non-virtual, environment).
An emulator is an intermediary step in terms of levels of abstraction in working with a program. Roughly speaking, we can say that while an emulator still works with a file, it does analyze events. Emulators are used in many (possibly even all) major antivirus products, usually either as an addition to a core, lower-level file engine or as insurance for a higher-level engine (such as a sandbox or system monitoring).
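The toy interpreter below is meant only to convey the flavour of emulation: a made-up instruction set is stepped inside a fake environment, so 'dangerous' actions are observed rather than executed. Real emulators decode actual machine code and model far more of the system.

```python
# Toy flavour of emulation: a tiny, invented instruction set is stepped inside a
# virtual environment, so 'dangerous' operations are observed, never executed.

SUSPICIOUS_CALLS = {"WriteSystemFile", "DisableFirewall"}

def emulate(program, max_steps=1_000):
    virtual_env = {"registers": {}, "observed_calls": []}
    for step, (op, arg) in enumerate(program):
        if step >= max_steps:                      # emulation always terminates
            break
        if op == "MOV":
            reg, value = arg
            virtual_env["registers"][reg] = value  # changes only the fake state
        elif op == "CALL":
            virtual_env["observed_calls"].append(arg)
        elif op == "END":
            break
    return SUSPICIOUS_CALLS.intersection(virtual_env["observed_calls"])

program = [("MOV", ("eax", 1)),
           ("CALL", "WriteSystemFile"),
           ("END", None)]
print(emulate(program))   # {'WriteSystemFile'} -- detected without being run
```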
Virtualization: the sandbox
Virtualization as it is used in so-called sandboxes is a logical extension of emulation. The sandbox works with programs that are run in a real environment but the environment is strictly controlled. The name sandbox itself provides a relatively accurate picture of how the technology works. You have an enclosed space in which a child can play safely. In the context of information security, the operating system is the world, and the malicious program is the rambunctious child. The restrictions placed on the child are a set of rules for interaction with the operating system. These rules may include a ban on modifying the operating system's directory, or restricting work with the file system by partially emulating it. For example, a program that is launched in a sandbox may be fed a virtual copy of a system directory so that modifications made to the directory by the program under investigation do not impact the way the operating system works. Any point of contact between the program and its environment (such as the file system and system registry) can be virtualized in this way.
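The sketch below illustrates one of the rules mentioned above - feeding the program a virtual copy of a system directory - as a simple copy-on-write overlay. Real sandboxes hook file system and registry access inside the operating system; the dictionary-based stand-in here is only meant to show the idea.

```python
# Sketch of one sandbox rule from above: the program sees a virtual copy of a
# 'system directory', so its modifications never reach the real one.

class SandboxedDirectory:
    def __init__(self, real_directory: dict):
        self._real = real_directory     # the operating system's actual state
        self._overlay = {}              # virtual copy receiving all modifications

    def read(self, name):
        return self._overlay.get(name, self._real.get(name))

    def write(self, name, data):
        self._overlay[name] = data      # the real directory is never touched

system_dir = {"kernel32.dll": b"original bytes"}
sandbox = SandboxedDirectory(system_dir)

sandbox.write("kernel32.dll", b"patched by suspicious program")
print(sandbox.read("kernel32.dll"))     # the program sees its own modification
print(system_dir["kernel32.dll"])       # b'original bytes' -- the real OS is safe
```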
The line between emulation and virtualization may be a fine one, but it is a clear one. The former provides the environment in which a program is run (and fully contains and controls the program as it runs). The latter uses the operating system itself as the environment, and the technology merely controls the interaction between the operating system and the program. Unlike emulation, in virtualization the environment is on a separate but equal footing with the technology.
Protection using the kind of virtualization described above works not with files but with program behaviour – and not with the system itself.
Sandboxing - like emulation - isn’t used extensively in antivirus products, mainly because it requires a large amount of resources. It's easy to tell when an antivirus program uses a sandbox, because there will always be a time delay between when the program is launched and when it actually starts to run (or, if a malicious program is detected, there will be a delay between the program's launch and the notification announcing a positive detection). At the moment, sandbox engines are used in only a handful of antivirus products. However, a great deal of research is currently being done into hardware virtualization, which may lead to this situation changing in the near future.
Monitoring system events
Monitoring system events is a more abstract method of collecting data which can be used to detect malicious programs. An emulator or sandbox observes each program separately; monitoring technology observes all programs simultaneously by registering all operating system events created by running programs.
Data is collected by intercepting operating system functions. By intercepting the call to a certain system function, information can be obtained about exactly what a certain program is doing in the system. Over time, the monitor collects statistics on these actions and transfers them to the analytical component for analysis.
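In spirit, collecting statistics on intercepted calls might look like the sketch below. The interception mechanism here is just a Python wrapper around stand-in functions, whereas real monitors hook operating system functions for all running programs at once.

```python
import functools
from collections import Counter

# Sketch of the monitoring idea: wrap ('intercept') a function so every call is
# recorded before it runs, and hand the accumulated statistics to the analytical
# component. The functions below are stand-ins, not real system calls.

event_log = Counter()

def intercepted(event_name):
    def wrap(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            event_log[event_name] += 1          # collect statistics on the action
            return func(*args, **kwargs)
        return wrapper
    return wrap

@intercepted("file_write")
def write_file(path, data):
    pass        # stand-in for the real system function

@intercepted("registry_write")
def write_registry(key, value):
    pass

for i in range(3):
    write_file(f"report{i}.txt", b"...")
write_registry(r"HKCU\Software\Demo", "1")

print(dict(event_log))   # {'file_write': 3, 'registry_write': 1} -> to analysis
```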
This technology is currently the most rapidly evolving technology. It is used as a component in several major antivirus products and as the main component in individual system monitoring utilities (called HIPS utilities, or simply HIPS - these include Prevx, CyberHawk and a number of others). However, given that it’s possible to get around any form of protection, this malware detection method is not exactly the most promising: once a program has been launched in a real environment, the risk it poses considerably reduces the effectiveness of the protection.
Scanning for system anomalies
This is the most abstract method used to collect data about a possibly infected system. It is included here as it is a logical extension of other methods, and because it demonstrates the highest level of abstraction among the technologies examined in this article.
This method makes use of the following features:
an operating system, together with the programs running within that system, is an integrated system;
the operating system has an intrinsic “system status”;
if malicious code is run in the environment, then the system will have an “unhealthy” status; this differs from a system with a “healthy” status, in which there is no malicious code.
These features help determine a system's status (and, consequently, whether or not malicious code is present in the system) by comparing the status to a standard or by analyzing all of the system’s individual parameters as a single entity.
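A minimal sketch of the "compare against a standard" idea follows; the parameters tracked are invented examples, and choosing and weighing real parameters is exactly the hard part discussed below.

```python
# Minimal sketch of comparing the current system status against a saved baseline.
# The parameters below are invented examples for illustration only.

baseline = {
    "loaded_drivers": {"disk.sys", "ndis.sys", "tcpip.sys"},
    "autorun_entries": {"OneDrive", "SecurityClient"},
    "open_ports": {135, 445},
}

def anomalies(current: dict) -> dict:
    return {key: current[key] - baseline[key]     # what appeared since the snapshot
            for key in baseline
            if current[key] - baseline[key]}

current = {
    "loaded_drivers": {"disk.sys", "ndis.sys", "tcpip.sys", "r00tkit.sys"},
    "autorun_entries": {"OneDrive", "SecurityClient"},
    "open_ports": {135, 445, 31337},
}
print(anomalies(current))
# {'loaded_drivers': {'r00tkit.sys'}, 'open_ports': {31337}}
```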
In order to detect malicious code effectively using this method, a relatively complex analytical system (such as an expert system or neural network) is required. Many questions arise: what is the definition of “healthy” status? How does it differ from “unhealthy” status? Which discrete parameters can be tracked? How should these parameters be analyzed? Due to its complexity, this technology is still underdeveloped. Signs of its initial stages can be seen in some anti-rootkit utilities, where it makes comparisons with certain system samples taken from a standard (obsolete utilities such as PatchFinder and Kaspersky Inspector), or certain individual parameters (GMER, Rootkit Unhooker).
An interesting metaphor
The analogy of the child which is used in the section on sandboxing can be extended. For example: an emulator is like a nanny that continually watches over a child to make sure s/he doesn’t do anything undesirable. System event monitoring is like a kindergarten teacher who supervises an entire group of children, and system anomaly detection can be compared to giving children full rein while keeping a record of their grades. And in terms of this metaphor, file byte analysis is like family planning, or more precisely, looking for the "twinkle" in a prospective parent's eye.
And just like children, these technologies are developing all the time.

The evolution of PC Virus 2

Malware defense systems: a model
Let's start by examining how malware detection technologies work using the following model.

Any protection technology can be separated into two components: a technical component and an analytical component. Although these components may not be clearly separate at a module or algorithm level, in terms of function they do differ from each other.


The technical component is a collection of program functions and algorithms that provide data to be analyzed by the analytical component. This data may be file byte code, text strings within a file, a discrete action of a program running within the operating system or a full sequence of such actions.

The analytical component acts as a decision-making system. It consists of an algorithm that analyzes data and then issues a verdict about the data. An antivirus program (or other security software) then acts in accordance with this verdict in line with the program’s security policy: notifying the user, requesting further instructions, placing a file in quarantine, blocking unauthorized program actions, etc.
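To make the split concrete, the roles described above can be sketched roughly as follows (the names and interfaces are illustrative only, not taken from any real product):

from typing import Iterable, Protocol

class TechnicalComponent(Protocol):
    def collect(self) -> Iterable[bytes]:
        """Yield raw data to be analyzed: file contents, action records, etc."""

class AnalyticalComponent(Protocol):
    def verdict(self, data: bytes) -> str:
        """Return e.g. 'malicious' or 'clean' for one item of collected data."""

def protection_loop(tech: TechnicalComponent, analytic: AnalyticalComponent):
    # The security policy then acts on each verdict (notify, quarantine, block...).
    for item in tech.collect():
        yield analytic.verdict(item)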

As an example, let’s use this model to examine classic methods based on signature detection. A system that gets data about the file system, files and file contents acts as the technical component. The analytical component is a simple operation that compares byte sequences. Broadly speaking, the file code is input for the analytical component; the output is a verdict on whether or not that file is malicious.
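Plugging signature detection into that sketch might look like the toy example below (the signature bytes are made up; real engines use far more elaborate storage and matching):

import os

class FileReader:                        # technical component: supplies file contents
    def __init__(self, root):
        self.root = root

    def collect(self):
        for dirpath, _, names in os.walk(self.root):
            for name in names:
                try:
                    with open(os.path.join(dirpath, name), "rb") as f:
                        yield f.read()
                except OSError:
                    continue

class SignatureMatcher:                  # analytical component: compares byte sequences
    def __init__(self, signatures):
        self.signatures = signatures     # list of known-bad byte sequences

    def verdict(self, data):
        return "malicious" if any(sig in data for sig in self.signatures) else "clean"

# Usage with the protection_loop above (hypothetical signature bytes):
# for v in protection_loop(FileReader("."), SignatureMatcher([b"\xde\xad\xbe\xef"])):
#     print(v)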

When using the model above, any protection system can be viewed as a complex number - something that connects two separate constituents, i.e. the technical and analytical components. Analyzing technologies in this way makes it easy to see how the components relate to one another, along with their pluses and minuses. In particular, using this model makes it easier to get to the bottom of how certain technologies work. For example, this article will discuss how heuristics as a method for decision-making are simply one type of analytical component, rather than a truly independent technology. And it will consider HIPS (Host Intrusion Prevention System) as just a type of technical component, a way to collect data. These terms do not contradict one another, and they also do not fully characterize the technology that they are used to describe: we can discuss heuristics without specifying exactly what data is undergoing heuristic analysis, and we can talk about an HIPS system without knowing anything about the principles that guide the system in issuing verdicts.

These technologies will be discussed in more detail in their respective sections. Let’s first examine the principles at the heart of any technology used to search for malicious code: technical (methods for gathering data) and analytical (methods for processing the collected data).


The evolution of PC Virus 1

This article provides an overview of methods used to detect malicious code; of the functional (and to some extent chronological) connections between these methods; and of their technological and applied features. Many of the technologies and principles covered in this article are still current today, not only in the antivirus world, but also in the wider context of computer security systems. However, some of the technologies used by the antivirus industry – such as unpacking packed programs and streaming signature detection – are beyond the scope of this article.
The first malware detection technology was based on signatures: segments of code that act as unique identifiers for individual malicious programs. As viruses have evolved, the technologies used to detect them have also become more complex. Advanced technologies (heuristics and behaviour analyzers) can collectively be referred to as ‘nonsignature’ detection methods.
Although the title of this article implies that the entire spectrum of malware detection technologies is covered, it primarily focuses on nonsignature technologies; this is because signature-based methods are primitive and repetitive, and there is little to discuss. Furthermore, while signature scanning is widely understood, most users do not have a solid understanding of nonsignature technologies. This article explains the meanings of terms such as "heuristic," "proactive detection," "behavioral detection" and "HIPS"; examines how these technologies relate to one another; and discusses their advantages and drawbacks. This article, like our previously published The evolution of self-defense technologies in malware, aims to systemize and objectively examine certain issues relating to malicious code and defending systems against malicious programs. Articles in this series are designed for readers who have a basic understanding of antivirus technologies, but who are not experts in the field.

Two Supercomputers


Two Supercomputers Are Better Than One
If Argonne's combined Blue Gene/P machines were running today, they would easily be the world's fastest. Prices for such a computer range from $50 million to $150 million, depending on configuration, an IBM spokesperson said. While some scientists will use the computers for their research, most of the new computing capacity will be used by labs across the country.
Argonne National Laboratory, which provides scientific computing resources to researchers across the country, is doubling down on supercomputers, installing two of the most advanced units IBM (NYSE: IBM) makes and linking them together to work as one.
In a deal announced Thursday, Argonne will get a computer from IBM that performs 445 trillion calculations per second, the 445 teraflop Blue Gene/P system. Put in human terms, every one of the 6 billion people on Earth would need to perform 70,000 calculations a second to match this supercomputer.
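A quick back-of-the-envelope check of that comparison, using only the figures quoted here: $6 \times 10^{9} \times 7 \times 10^{4} = 4.2 \times 10^{14}$ calculations per second, or roughly 420 teraflops, which is indeed in the same range as the machine's 445 teraflops.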
Researchers at companies like Pratt & Whitney and Procter & Gamble, as well as major universities, use Argonne's computing power to solve problems that are inaccessible to ordinary computers. For example, P&G researchers use computer time to simulate the molecular basis of bubble formation to develop better detergents and also improved fire-control chemicals and environmentally-friendly consumer products.
Sum of the Parts
Argonne is already installing a new Blue Gene/P that is slower than the 445 teraflop model due for installation next year. When the two are combined, they will operate at 556 teraflops. The lab also operates an older Blue Gene/L model that will continue to run separately at 5.7 teraflops.
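Assuming the 556-teraflop figure covers only the two Blue Gene/P systems, the machine already being installed should account for roughly $556 - 445 = 111$ teraflops.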
"By the time this project is complete, Argonne will be home to one of our country's preeminent computing facilities," said Rick Stevens, associate laboratory director for computing and life sciences.
Indeed, if Argonne's combined Blue Gene/P machines were running today, they would easily be the world's fastest. Prices for such a computer range from $50 million to $150 million, depending on configuration, an IBM spokesperson said. Specific terms of Argonne's deal weren't disclosed.
While some scientists will use the computers for their research, most of the new computing capacity will be used by labs across the country. High-speed connections enable researchers working hundreds of miles from Argonne to run programs as if they were next door.
Jet-Engine Science
Pratt & Whitney researchers have used 750,000 hours' worth of computer time this year to simulate conditions inside a jet engine combustor where fuel and oxygen combine. Their goal is to reduce jet engine emissions by 55 percent.
"Modern combustors couldn't be designed without their computer tools," said Peter Bradley, a Pratt & Whitney computer scientist. Argonne not only provides computer time to the company, but also supplies advice and expertise to help use the supercomputer for basic research, he said.
"We don't use the supercomputer to design engines," Bradley said. "We use it to study the science and physics that provides us with tools we use in designing engines."
The U.S. Department of Energy, which funds Argonne and oversees the program that doles out computer time from Argonne and other national labs, at first focused on academic research, but recently has expanded to include industrial scientists, said Herb Schultz, IBM's supercomputer division marketing manager.
"We like that because it exposes the systems to more applications," said Schultz. "It shows how much more science can be done. A lot of times, a supercomputer comes on the market and people think it's just academic and esoteric. We want to get more people using them."
Software Development
IBM's deal with Argonne includes a collaboration to develop more open source software for Blue Gene machines to expand the applications available. Argonne computer scientists will also provide feedback to IBM to help in designing future machines.
"Very large machines have unique challenges in making them operate efficiently," said Ray Bair, director of the Argonne Leadership Computing Facility that houses the Blue Genes.
Most of the work done using Blue Gene machines consists of simulations intended to provide insights into how fundamental processes work, such as the formation of soap bubbles or the combustion of jet fuel, Bair said.
"The vast majority of very large problems are in the domain that requires a supercomputer," he said.
IBM and Google (Nasdaq: GOOG) agreed last month to work with a group of university scientists to expand "cloud computing," where large numbers of server computers will handle tasks presented by millions of users. This need grows out of the popularity of Facebook, MySpace and other social computing platforms.
"Programmers have usually been taught to write for a single computer or a few," said Dennis Quan, chief technology officer of IBM's high performance computing and software group. "They're not taught to write for tens of thousands of machines. But levels of parallelism and complexity are advancing to where in a few short years, this will be very mainstream.

Silicon Brains


Silicon Brains
Computer chips designed to mimic how the brain works could shed light on our cognitive abilities.
By Emily Singer

Samsung's New Solid-State Drives


Samsung's New Solid-State Drives Target Enterprise Servers
Samsung unveiled super-fast solid state drives that use the high-speed SATA II interface. The South Korean firm is producing sample 1.8-inch and 2.5-inch drives that offer a boost in performance over existing drives. The new drives have a sequential write speed of 100 MB per second and sequential read speed of 120 MB per second.
Samsung has begun sampling new 1.8-inch and 2.5-inch 64 GB solid state drives (SSDs) aimed at enterprise servers and high-end PCs, the company announced Sunday.
Using a super-fast SATA (Serial Advanced Technology Attachment) II/native SATA interface, the drives offer a sequential write speed of 100 MB per second and a sequential read speed of 120 MB per second.
"The 64 GB SATA II SSD is based on Samsung's cutting-edge NAND technology with dramatically improved performance specs that are taking system performance to a whole new level of efficiency," said Jim Elliott, director of NAND flash marketing for Samsung Semiconductor.
Company officials could not be reached for information on expected pricing or timing of availability for the drives.
Twice the Speed
The new drives combine a 50 nm-class, single-level-cell (SLC) 8 Gb flash chip with a proprietary, high-speed SATA controller and supporting software. Their interface speed is 3 Gb per second -- twice that of Samsung's SATA I predecessor, the company said.
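For context, and assuming the usual 8b/10b encoding on the SATA physical layer (a detail not stated in the announcement), the quoted interface speed corresponds to roughly $3\ \text{Gb/s} \times 8/10 = 2.4\ \text{Gb/s} = 300\ \text{MB/s}$ of usable bandwidth, so the 100 MB/s write and 120 MB/s read figures are limited by the flash and controller rather than by the SATA II link itself.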
The SATA II drives also require only half as much power as do hard drives typically used in notebook PCs, which generally consume 1.9 watts, and one-tenth the power consumed by enterprise-class 15,000 rpm hard drives in servers, Samsung said.
Samsung's 32 GB PATA (Parallel ATA) solid state drive debuted in March 2006, followed by a 64 GB version using the SATA I interface in March of this year.
The market for solid state drives based on NAND flash memory is expected to show 74 percent compounded annual growth over the next five years, reaching US$10 billion in 2012, Samsung said, citing data from Web-Feet Research.
Maturing Market
Indeed, the market for solid state drives will continue to mature over the next couple of years, Jeff Janukowicz, research manager for SSD and HDD (hard disk drive) components at IDC, told TechNewsWorld.
"Our view is that solid state drives like these are a glimpse at the future in terms of what people can expect in the laptop and server world," Janukowicz said. "Solid state drives offer a lot of advantages over traditional hard drives, which ultimately they're aiming to replace."
For instance, compared with traditional hard drives, which are based on a mechanical system, solid state drives are faster and offer reduced power consumption, he noted.
"In applications where you want a faster response, like for booting up the operating system or opening files, solid state drives have a real advantage," he said.
Higher Prices
Capacity tends to be somewhat lower on SSDs, but their main downside is their price, which remains relatively high, he added.
"I think it's certainly an attractive solution, particularly for people looking for higher performance out of their servers," Janukowicz said.
SanDisk is Samsung's principal competitor in the market for SSDs, but Samsung's dominance in NAND flash media gives it an inherent advantage, he added.
"The NAND flash media is where a lot of the cost in these drives lies," Janukowicz said.
Catching Up
Price is definitely a factor, and 64 GB is "not very big," Roger Kay, president of Endpoint Technologies, told TechNewsWorld. "Today, hard drives are typically 80 GB, and much bigger is available, reaching up to 160 GB, 200 GB or even beyond."
As a result, users won't likely pay the higher prices for SSDs unless they really need their improved durability, speed and power consumption, he said.
The 1.8- and 2.5-inch form factors of Samsung's new offerings are also more typical of mobile sizes, he added.
Looking forward, solid-state technology still has a ways to go before it will replace traditional hard drives, Kay predicted.
"Solid state drives are not really direct replacements for spinning media, but they're getting there," he said. "It's a question of when they will be comparable from a price perspective."

Raising Consciousness


Raising Consciousness
Some seemingly unconscious patients have startlingly complex brain activity. What does that mean about their potential for recovery? And what can it tell us about the nature of consciousness?
In 2003, 39-year-old Terry Wallis uttered his first word ("mom") in the 19 years since a car accident had left him with severe brain damage. He had spent much of the previous two decades in what neurologists call a minimally conscious state, somewhere in the gray area between coma and consciousness. In the years before his awakening, however, Wallis's family had noticed that he was growing more alert and responsive, occasionally nodding, grunting, or even crying, until one day he spontaneously started to speak. Though Wallis still has serious impairments in memory and movement, he continues to make remarkable gains.
No one knows what spurred Wallis's return to the waking world. But neurologist Nicholas Schiff is determined to find out. A researcher at the Weill Cornell Medical College in New York City, Schiff is one of a handful of scientists studying people like Wallis, patients who spend months or years seemingly unaware of the outside world and unable to communicate. Using new brain-imaging techniques, Schiff is hoping to better understand the complex nature of consciousness--and find ways to help treat the thousands of patients who suffer from severe consciousness disorders.
Eight months after Wallis's first words, Schiff and his collaborators began taking snapshots of Wallis's brain using a new method that can create detailed maps of the brain's nerve fibers. What they found surprised them. Over the next year and a half, the researchers' images seemed to show that Wallis's brain had partially healed itself. But how? And what triggered the healing process?
In the last few decades, improved medical technologies have kept more people alive after brain injuries, but many of them have been left in apparently permanent states of impaired consciousness. Immediately after a severe brain injury, a patient often enters a coma--a period of unconsciousness that typically lasts days or, at most, weeks. Those who survive do not necessarily awaken; instead, they may enter a vegetative state or a minimally conscious state (MCS), which can last for years. While it's difficult to determine how many minimally conscious patients there are in the United States (MCS was introduced as a diagnostic category only in 2002), some estimates put the number at 25,000 or more--about 10 times the number of vegetative patients. (The two conditions can be difficult even for neurologists to distinguish. Vegetative patients are defined as those totally unaware of their environment, while patients who are in a minimally conscious state may occasionally laugh or cry, reach for objects, or even respond to simple questions.)
Unlike Wallis, most patients who spend years in a minimally conscious state never wake up. Prospects for recovery diminish as time ticks by, so many doctors adopt a sort of therapeutic nihilism toward those who are persistently unconscious, assuming that their cases are hopeless, says Steven Laureys, a neurologist at the University of Liège in Belgium. MCS patients have few treatment options, and most don't get rigorous long-term follow-up or intensive rehabilitation. Wallis, for example, had no neurologist, and not much medical history was collected during his 19-year hiatus from consciousness.

A New Nanogenerator


A New Nanogenerator
Researchers are experimenting with a novel nanowire material to power tiny biosensors and portable devices.
By Prachi Patel-Predd
Wireless biosensors that monitor pathogens in water and measure blood pressure or cancer biomarkers in the body are shrinking to nanometer dimensions. To operate them, researchers are looking for equally small power sources. Nanowires that convert mechanical energy into electricity are a promising technology.
Now researchers at the University of Illinois at Urbana-Champaign (UIUC) have taken the first step toward building a nanogenerator out of barium titanate. So far, efforts to make nanogenerators have focused on zinc-oxide nanowires. But barium titanate could lead to better generators because it shows a stronger piezoelectric effect, says mechanical-science and engineering professor Min-Feng Yu, who is leading the research at UIUC. Experiments show that a barium-titanate nanowire can generate 16 times as much electricity as a zinc-oxide nanowire from the same amount of mechanical vibrations, he says.
Nanogenerators could lead to many advances: biomedical sensors powered by blood flow or muscle contractions, tiny gas sensors that run on wind or acoustic waves, pathogen monitors powered by water flow, and portable electronics that are hooked up to nanowires in shoes. "The nanogenerator idea has become more and more convincing, " says Yi Cui, materials-science and engineering professor at Stanford University. "It's an idea that might work."
In 2006, a team of researchers led by Zhong Lin Wang of the Georgia Institute of Technology first showed that zinc-oxide nanowires could harvest mechanical energy to generate electricity. Wang's group has since made a lot of progress, most recently demonstrating a zinc-oxide nanowire array that outputs direct current in response to ultrasonic vibrations. (See "Nanogenerator Fueled by Vibrations.")
The UIUC team is the first to use barium titanate. In an online Nano Letters paper, Yu and his colleagues show that applying vibrations to a single barium-titanate nanowire leads to a small energy output. In their experiment, the researchers bridge a nanowire across a gap on a substrate, keeping one end stationary and moving the other end. The output energy is extremely small--about 0.3 attojoules--but for the same setup, a zinc-oxide nanowire gives a 16 times smaller energy output, Yu says.
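Taking those figures at face value, the same setup with a zinc-oxide nanowire would yield roughly $0.3\ \text{aJ} / 16 \approx 0.02$ attojoules, i.e. about $2 \times 10^{-20}$ joules.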
Xudong Wang, a researcher in Zhong Lin Wang's (no relation) group and a 2007 TR35 winner, is happy to see progress on using materials other than zinc oxide to make nanogenerators. He says that the results look promising. The biggest advantage with using barium titanate, he feels, is that "it is possible to generate higher voltages than zinc oxide. This is very important for a power source."
But zinc oxide has its own advantages. It is nontoxic to biological systems, so it might be better suited than barium titanate for implantable devices. Also, it is easier to control zinc-oxide growth in order to fabricate nanowire arrays. "To make an applicable device, you need to have many nanowires with the same orientation in the same location," Xudong Wang says. That could be hard to achieve with barium titanate.
Yu acknowledges the difficulties with growing barium-titanate nanowires. His and his colleagues' work is preliminary at this point, he says, but it already shows the potential for making more-efficient, higher-output nanogenerators. As for Cui, he says that barium-titanate nanogenerators might be feasible, but he cautions that "in terms of making a working device, certainly there's still a way to go."

HP, MS Aim to Tidy Up Digital Clutter


HP, MS Aim to Tidy Up Digital Clutter With New Home Server
Homes bursting with digital data will be the prime targets of Microsoft and HP when the duo's MediaSmart Server running Windows Home Server ships later this month. The device is designed to present users with a relatively easy way to share, organize and back up their personal media data. The word "server" in the name, however, may be a turn-off to consumers with less tech know-how.
Microsoft (Nasdaq: MSFT) and HP (NYSE: HPQ) announced the availability of the HP MediaSmart Server running Windows Home Server. The new class of server enables consumers to protect, connect and share digital media and documents.
Aimed at consumers, the MediaSmart Server (MSS) is the flagship product in what Microsoft said is an entirely new category of consumer products.
"Digital devices and content are everywhere in our day-to-day lives, and they are important all the time," said Bill Gates, chairman of Microsoft. "With the launch of Windows Home Server, Microsoft and its partners are creating a new consumer product category that will help people keep their digital media safe and make it easier for them to enjoy it with friends and family."
The HP MediaSmart Server will begin shipping later this month. The 500 GB version is priced at US$599. For consumers with greater storage needs, there is also a 1 TB model for $749.
Are You Being Served?
Designed principally for homes with several personal computers, the MediaSmart Server provides consumers with powerful server technology similar to that found in the workplace. It automatically backs up Windows XP-based and Vista-based PCs each night and provides a central place to organize digital documents and media. The MSS also includes a Windows Live Internet address to access the server from just about anywhere and share content with friends and family.
In addition, the server monitors the health and security status of networked PCs and can stream media to other devices throughout the home. By streaming to the Xbox 360, for example, users can listen to music, look at photos or watch videos on their television.
"The product will appeal mostly to people with three or more PCs who also tend to have a lot of digital media (photos, videos, music, TV shows) that they want to back up or share," Steve Kleynhans, a Gartner (NYSE: IT) analyst, told TechNewsWorld.
Unlike enterprise-class servers, however, the MSS is basically a computer with a lot of storage and a processor that is a little smaller than those found on business servers, said Roger Kay, president of Endpoint Technologies Associates.
"It's got this special job, so it is kind of a special service PC, and then the interface has been created to hide most of the complexity of the PC from you. All you see are a few simple buttons you hit to do tasks that are pretty clear. It's sort of built on top of a PC architecture," he told TechNewsWorld.
Back It Up
The benefit of having a central storage location for digital memories will appeal to consumers with a bit of technological savvy under their belts, Kleynhans noted.
"Central storage for your digital memories with a pretty solid backup facility coupled with remote access and ongoing expandability [are the benefits for consumers]. It's not going to appeal to everyone, but more technically centric users will find the combination attractive," he said.
"You have to have at least two copies of everything. If something hits a snag, you've just lost everything, and the chances of that happening in two places at once are pretty rare. If you lost either your computer or this centralized storage repository you could replace one or the other of them before you lost your data," Kay echoed. "And since the data is more important than any hardware, you'd better have at least two copies."
Microsoft Inside
Billed as a "stay-at-home" server, the MSS is powered by the Windows Home Server (WHS) platform. Microsoft introduced the software, formerly known as Quattro, in January at the Consumer Electronics Show held in Las Vegas. Since Quattro's unveiling, Microsoft has released a software development kit to allow third-party developers to create applications that will work in conjunction with WHS.
So far, some 35 third-party vendors have built WHS add-ons, including software for personal Web sites and blogging, media streaming, online backup and storage, home security and home automation, as well as software to detect viruses and malware.
Microsoft has done a "pretty good job" developing server software that is easy to set up and operate, according to Kleynhans.
"[They] did a good job balancing capabilities with ease of use. There are a still some rough edges, and it will undoubtedly get better over time, but it's a pretty reasonable first version," he stated.
Home Server Invasion
HP is just one of several leading manufacturers releasing hardware products running WHS. The Iomega (NYSE: IOM) HomeCenter Server will be available in early 2008. Also hitting store shelves early next year is the Lifeware Lifestorage server, which offers enterprise-class capabilities to consumers.
Europeans can look for the MaxData Belinea o.center, an energy-efficient solution with advanced management and media functions, as well as the Fujitsu Siemens Scaleo Home Server 1900, with 1 TB capacity, gigabit Ethernet and advanced power management features, by the end of the year.
For those who cannot wait, there is the Tranquil PC T7-HAS Harmony Home Server. It is a small, quiet, energy-efficient solution available in 500 GB and 1 TB versions. The Velocity Micro Net Magix HomeServer is also small and can be situated horizontally or vertically. It starts at 1 TB of storage.
With several devices already on the market and more to come, Kleynhans thinks this new product category is headed for success.
"The real key to success will be getting users to look past the name. Calling it a 'server' automatically raises red flags for many less sophisticated users who immediately associate that term with complexity and the need for IT skills," Kleynhans explained.
"If they can get users to look beyond that and see the intelligence and automation that has been wrapped into the package, the product family should find a receptive audience -- at least with the more technically centered users. Basic external drives and even network attached storage devices, are good options, but don't match the home server in terms of capabilities," he concluded