Undergraduate Theses
Supervised by David Evans

2004

Yuan-Yao (Jeffrey) Chang, 802.11 Person-In-Middle (PiM) Attacks: Implementation and Practical Solutions, 23 March 2004. [PDF]
IEEE 802.11 is one of the first wireless local area network standards to be widely deployed. Much research has examined the standard, and it is known for its insecurity; several reports have addressed vulnerabilities in 802.11-based networks, stemming mainly from the standard's lack of authentication. This thesis project examines a specific attack called the Person-in-Middle (PiM) attack, a serious threat that in the worst case could give an attacker full control of any computer in the wireless network. By designing and recommending four solutions to the attack, the project aims to enhance current IEEE 802.11 security.

This thesis project also aims to improve the IEEE 802.11 standard by analyzing the security mechanisms the standard provides. The standard offers only basic mechanisms: Wired Equivalent Privacy (WEP), an encryption protocol, and the MAC address-based access control list, which is essentially a list of legitimate clients. Attackers may easily penetrate both.

Of the four solutions proposed in this thesis document, the Third Party PiM Attack Detector is optimal in terms of cost and deployment time. Due to time constraints and ethical considerations, I did not test the PiM attack solutions; successors to this thesis project should implement the PiM attack and test all four solutions.

Christopher Frost, Amorphous Shape Mapping, 7 May 2004. [PDF, Web Page, Code (.tar.gz)]

Research in amorphous computing studies asynchronous, identically programmed, decentralized agents performing computations. Work in this area has produced methods for taking existing descriptions of arbitrary shapes and amorphously regrowing approximations of those shapes. These methods assume descriptions of shapes are in a form accessible to traditional computers; however, using traditional computers to produce such descriptions of shapes in the physical world is generally difficult.

The objective of this thesis project was to develop and analyze a method for generating a description, accessible to traditional computers, of an arbitrary two-dimensional shape by amorphously mapping the desired shape. Three useful types of shape descriptions that can be generated from an amorphous shape mapping computer (connectivity, point sampling, and polygonal) are presented and discussed. With the intended descriptions and uses in mind, the developed method's assumptions are stated and discussed and the primitive cell actions are given. The developed cell program's three stages are then detailed: placing the cells, mapping the region, and transferring the gathered data to a traditional computer. This thesis project's focus is mapping the given region.

Simulations of the three types of shape descriptions are described and their results presented. Analytical results were derived for ensuring complete region description. The developed amorphous shape mapping method is able to accurately map the tested shapes using relative cell location information obtained as cells receive messages from other nearby cells. Experiments show that even cells without relative location sensing abilities can produce descriptions of the mapped shapes.

Jackson Kwok, A Wireless Protocol to Prevent Wormhole Attacks, 23 March 2004. [PDF]
As an increasing number of people go wireless, reducing the vulnerability of wireless networks is becoming a top priority. Wireless networks are susceptible to many attacks, including the wormhole attack, which is very powerful and has proven very difficult to prevent. A strategic placement of the wormhole can result in a significant breakdown in communication across a wireless network.

This project designed and developed a new protocol that prevents wormhole attacks on wireless networks. The protocol's design is based on asymmetric and symmetric key cryptography and the Global Positioning System (GPS). It was evaluated using simulations under realistic ad hoc network settings, which identified the protocol's strengths and weaknesses under different distributions of GPS and non-GPS nodes, network areas, and network structures. Within a set of requirements and assumptions, this wireless security protocol can detect nearly half of wormhole attacks by relying on each node's relative location.
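
The protocol itself is not reproduced in this summary, but its core location-based check is easy to illustrate. In the sketch below (the class, method names, and range constant are illustrative assumptions, not the thesis code), a GPS-equipped node flags a neighbor whose claimed coordinates lie beyond any plausible one-hop radio range, the telltale signature of a packet tunneled through a wormhole:

    /**
     * Illustrative sketch (not the thesis implementation): a GPS-equipped
     * node rejects a "neighbor" whose claimed coordinates lie beyond the
     * maximum plausible radio range.
     */
    public final class WormholeCheck {
        /** Assumed maximum one-hop radio range in meters (illustrative value). */
        static final double MAX_RADIO_RANGE_M = 250.0;

        /** Great-circle distance between two GPS fixes (haversine formula). */
        static double distanceMeters(double lat1, double lon1, double lat2, double lon2) {
            final double R = 6_371_000.0;  // mean Earth radius in meters
            double dLat = Math.toRadians(lat2 - lat1);
            double dLon = Math.toRadians(lon2 - lon1);
            double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                     + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                     * Math.sin(dLon / 2) * Math.sin(dLon / 2);
            return 2 * R * Math.asin(Math.sqrt(a));
        }

        /** A hello message heard from beyond radio range suggests it was tunneled. */
        static boolean looksTunneled(double myLat, double myLon,
                                     double claimedLat, double claimedLon) {
            return distanceMeters(myLat, myLon, claimedLat, claimedLon) > MAX_RADIO_RANGE_M;
        }
    }

In the full protocol the cryptography matters as much as the geometry: location claims must be authenticated (hence the asymmetric and symmetric keys), since otherwise a wormhole endpoint could simply forge plausible coordinates.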

Michael Peck, Improving the Usability of the ESC/Java Static Analysis Tool, 25 March 2004. [PDF]
The Extended Static Checker for Java (ESC/Java) examines Java source code for errors that could lead to runtime exceptions. ESC/Java can also check annotations placed in source code by programmers to express design decisions about how their code works. Although ESC/Java was developed for industry programmers, these features make it a powerful tool for teaching software engineering concepts. Unfortunately, as a command-line tool, ESC/Java is not tightly integrated with any of the integrated development environments (IDEs) regularly used by developers to write and test software. Also, ESC/Java's warning messages, which provide details about the coding errors it finds, can be difficult for students (and even experienced software developers) to understand, and students may be unsure how to write annotations to describe their code. My project analyzes these problems and alleviates some of them by creating a plugin to run ESC/Java from within the Eclipse IDE.

Software Webpage
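
For readers unfamiliar with the tool, the example below shows the general style of ESC/Java annotations (an illustrative sample, not code from the thesis). The //@ pragmas state a precondition, a postcondition, and an object invariant that ESC/Java checks statically, and its warnings are phrased in terms of these annotations:

    public class Account {
        private /*@ non_null @*/ String owner;  // warns wherever owner could be null
        private int balance;

        //@ invariant balance >= 0;             // object invariant, checked at method boundaries

        //@ requires initial >= 0;
        public Account(/*@ non_null @*/ String owner, int initial) {
            this.owner = owner;
            this.balance = initial;
        }

        //@ requires amount > 0;                        // caller must establish this
        //@ ensures balance == \old(balance) + amount;  // effect on object state
        public void deposit(int amount) {
            balance += amount;
        }
    }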

2003

Jonathan McCarrell McCune, Adaptability in Sensor Networks, 8 April 2003. [PDF]
Stringent energy constraints restrict the practical applications for sensor networks, as battery technology lags behind microelectronic system fabrication technology. Traditional sensor networks are built with general purpose processors (GPPs) because processing power consumption is insignificant compared to radio power consumption. As applications for sensor networks become more sophisticated, however, processor power utilization becomes significant. Successful sensor networks must adapt to changing conditions and requirements in order to maintain energy-efficient operation. This thesis considered a combination of two approaches to adaptability: parameterizable algorithms and hardware implementation.

Parameterizable algorithms allow sensor devices to tailor their operation to specific conditions and requirements. Examples of parameterizable algorithms include JPEG image compression and most symmetric ciphers. GPPs are popular because of their ability to perform any computable function (limited only by time, energy, and memory constraints) and the ease with which different programs can be executed. Sensor networks implemented with more efficient hardware designs offer improved performance, as required levels of adaptability can be achieved on simpler and more efficient hardware.

Two sensor network applications - JPEG image compression and encryption - were analyzed to determine the impact of adaptability on fidelity and longevity. Even in applications where transmission costs dominate, such as JPEG image compression, the energy savings obtained from using a more efficient processing implementation are significant. In applications where processing costs dominate, such as encryption, improvements of well over 100% in network longevity can be gained by switching from GPPs to small-scale reconfigurable (SSR) hardware. SSR hardware is shown to be an optimal design choice because algorithms can be implemented with efficiency approaching that of application-specific integrated circuits (ASICs) while maintaining adaptability.

Ankush Seth, Scalability and Communication within Swarm Computing, March 2003. [PDF]
Consider the potential application of space exploration using mobile swarm devices. The feasibility of such an application depends on the ability of the devices to disperse uniformly, starting from a clustered position. The disperse protocol used for such an application should also scale well with an increase in the number of devices present within the terrain. This research project looked at the scalability and communication aspects of the disperse primitive so as to come up with protocols that will help make such applications possible.

Swarm computing is a field that involves a large number of small independent devices that communicate with each other to perform an assigned task. The disperse primitive is the swarm behavior that enables devices to move from an initially clumped position to one that is dispersed. Communication was an important aspect of this work, since the swarm devices gathered information via wireless communication.

The three protocols developed were the random protocol, the random-with-history protocol, and the coordinated protocol. The evaluation showed that the coordinated protocol was the best in terms of performance measures such as time, energy, and the number of steps taken to reach the desired dispersed state.

2002

Nadim Barsoum, WIL: A Tool for Web-data Extraction, March 2002. [HTML]
This report describes the design and use of the Web Integration Library (WIL), a software library that facilitates the integration of software applications with the web. The library provides functions that search for and retrieve data from a website according to the programmer's input; it thus provides an interface to web pages and abstracts away lower-level HTML interpretation. Using the library's data-retrieval power, programmers are free to focus on more critical areas of their code. The library provides a complete client-side solution to web integration that does not require any work on the server side and that allows a program to take advantage of updates to a website.

Dev Batta, Finding a Give-And-Go In a Simulated Soccer Environment, April 2002. [PDF, Word]
A dynamic, unpredictable world complicates independent decision-making in a team of mobile robots. The number of possible actions a robot can take is vast, and environmental variables are constantly changing, so a robot must make decisions quickly and efficiently. This is true in all team environments, including a soccer match. This research uses a simulated soccer match and concentrates on finding when the opportunity for a special situation known as a give-and-go exists. A give-and-go is a sequence of two passes between teammates that attempts to leave a defender behind the play. The focus is to break down a continuous model of the world into discrete steps that evaluate the environment.

The evaluation of a give-and-go is broken down into five steps: 1) Does my team have the ball? 2) Is there a teammate farther upfield to whom the player with the ball can pass? 3) Is there a defender between the passer and the receiver who will not be able to intercept the initial pass? 4) Is there an open area of the field to which the passer can run? 5) Can the receiver redirect the ball to the open area to which the passer is running, without the other team intercepting? For a give-and-go to exist, these questions must be evaluated in order, and the answer to each must be yes.
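
Expressed as code, the procedure is a short-circuit conjunction. The sketch below is a paraphrase of the five steps, not the thesis implementation; WorldModel and its predicate methods are hypothetical stand-ins for the simulator's world-state queries:

    /** Hypothetical world-model queries; each predicate hides the geometry of one step. */
    interface WorldModel {
        boolean myTeamHasBall();                      // 1) possession
        boolean upfieldTeammateOpenForPass();         // 2) receiver farther upfield
        boolean defenderBetweenButCannotIntercept();  // 3) a defender to beat, pass still safe
        boolean openAreaForPasserRun();               // 4) space for the passer to run into
        boolean receiverCanRedirectToRunArea();       // 5) return pass can beat the defense
    }

    final class GiveAndGo {
        /** All five questions must answer yes, evaluated in order (short-circuit &&). */
        static boolean exists(WorldModel w) {
            return w.myTeamHasBall()
                && w.upfieldTeammateOpenForPass()
                && w.defenderBetweenButCannotIntercept()
                && w.openAreaForPasserRun()
                && w.receiverCanRedirectToRunArea();
        }
    }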

John Calandrino, Packets, Routers, and Automobiles: What Can Data Networks Teach Us About Traffic Congestion?, April 2002. [PDF]
This project applied data network congestion control and avoidance techniques to freeway systems through simulation and measured their effectiveness. Experimentation revealed that Fair Queueing at the most congested on-ramps, combined with the addition of a lane to the freeway, provided the most significant improvements to the freeway system. This finding demonstrates the potential for cross-applying data network congestion control techniques to congested freeways.

Data networks and freeway systems are similar in several ways. Both allow the flow of information or vehicles along limited pathways, and both have distinct entry points that are capable of flow-rate metering. There are differences between the systems, however, that make cross-application somewhat difficult. Traffic networks are generally centralized systems where current congestion data is readily available, while large-scale data networks are decentralized, which makes monitoring considerably harder and requires additional guesswork. Additionally, data networks usually prevent data from entering the system when congestion gets particularly bad, whereas such a flow shutdown at a freeway on-ramp would be unacceptable. Despite these differences, in the simulations conducted for this project data network congestion control techniques often increased throughput and capacity and reduced travel time on freeway systems, though at the cost of increased wait times at freeway on-ramps. Solutions to this wait problem include alternate routes, parallel freeways, and toll roads pro-rated for current wait time.
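
To make the cross-application concrete, here is a toy version of Fair Queueing at a metered on-ramp (an illustration of the analogy, not the simulator used in the project). Because every vehicle has the same "packet size," Fair Queueing reduces to round-robin service over per-approach queues:

    import java.util.ArrayDeque;
    import java.util.List;
    import java.util.Queue;

    /** Toy fair ramp meter: one vehicle per green interval, queues served round-robin. */
    final class FairRampMeter {
        private final List<Queue<String>> approaches;  // one FIFO per feeder approach
        private int next = 0;                          // round-robin pointer

        FairRampMeter(List<Queue<String>> approaches) { this.approaches = approaches; }

        /** Release the next vehicle onto the freeway, or null if every queue is empty. */
        String releaseOne() {
            for (int i = 0; i < approaches.size(); i++) {
                Queue<String> q = approaches.get((next + i) % approaches.size());
                if (!q.isEmpty()) {
                    next = (next + i + 1) % approaches.size();
                    return q.poll();
                }
            }
            return null;
        }

        public static void main(String[] args) {
            Queue<String> a = new ArrayDeque<>(List.of("car1", "car2"));
            Queue<String> b = new ArrayDeque<>(List.of("car3"));
            FairRampMeter meter = new FairRampMeter(List.of(a, b));
            // Prints car1, car3, car2: no single approach monopolizes the ramp.
            for (String car; (car = meter.releaseOne()) != null; ) System.out.println(car);
        }
    }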

It is my hope that the research described in this report will promote collaboration between two similar areas of research - data network and freeway system congestion resolution. Such sharing would greatly benefit both research areas.

Giles Cotter, Generation of Pseudorandom Numbers From Microphone Input in Computing Devices, April 2002. [PDF, Word]
Randomness is at the core of the encryption that keeps data secure in the world today. Unfortunately, it is very hard to find good sources of randomness. The aim of this project was to determine whether the computer microphone could be used as a valid source of randomness for various applications. The greatest challenge in developing methods of random number generation is ensuring that they are not breakable by someone with perseverance and technological know-how. Therefore, the primary goal of this project was to develop a method of random number generation that is very hard, if not impossible, to predict. Second, computing devices are becoming much smaller and are losing many of the traditional sources of randomness, such as the keyboard. Therefore, the second goal of this project was to develop a random number generation process that can be used on a wide range of computing devices.

To solve these problems, I chose to use the microphone, a feature that is now standard in almost all new PCs and many PDAs. Since each microphone records sound slightly differently owing to mechanical differences, my hypothesis was that a microphone is a good device from which to gather randomness. The main objective of this project was to develop a method that uses the random sound input to a microphone to produce random numbers.

Through testing, I determined that a microphone-based random number generator is feasible, but that ambient sound is not suitable for most entropy work. Radio static, however, produced very good entropy. Therefore, with a carefully chosen sound source, a microphone can indeed act as a very effective source of randomness.
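
The thesis generator is not reproduced here, but the overall shape of such a design is simple to sketch: capture raw samples and distill them through a cryptographic hash, so that biased sound input still yields uniformly distributed output bits. In the assumed outline below, the sample format, capture length, and choice of SHA-256 as the extractor are illustrative choices, not the thesis's:

    import java.security.MessageDigest;
    import javax.sound.sampled.AudioFormat;
    import javax.sound.sampled.AudioSystem;
    import javax.sound.sampled.TargetDataLine;

    /** Sketch: hash microphone samples into random bytes (source should be noise-rich). */
    public final class MicEntropy {
        public static byte[] randomBytes() throws Exception {
            AudioFormat fmt = new AudioFormat(44100f, 16, 1, true, false);  // 16-bit mono PCM
            TargetDataLine mic = AudioSystem.getTargetDataLine(fmt);
            mic.open(fmt);
            mic.start();
            byte[] samples = new byte[44100];             // roughly 0.5 s of audio
            int n = mic.read(samples, 0, samples.length);
            mic.close();
            MessageDigest sha = MessageDigest.getInstance("SHA-256");
            sha.update(samples, 0, n);                    // hashing whitens the biased samples
            return sha.digest();                          // 32 output bytes
        }
    }

Per the thesis findings, the input fed to such a generator should come from a noise-rich source such as radio static rather than quiet ambient sound.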

Michael Cuvelier, Communication Aware Swarm Dispersion Primitives, March 2002. [PDF]
This project deals with swarm programming, a new area of computer science in which multiple independent units, working together, perform complex behaviors. What makes this type of programming unique is that there is no central unit, smart server, or mainframe: the units must gain all knowledge and make all decisions by communicating with each other. The behaviors of these agents can be broken down into small building blocks called primitives. This thesis focuses primarily on the dispersion primitive, which guides swarm agents into a dispersed state from any starting state. I implemented two versions of the dispersion primitive: a randomized version and an N-threshold algorithm, which bases its behavior on a user-defined threshold of dispersive happiness.

The primitives were all implemented using a realistic communication strategy, where all information is acquired through the agents talking to each other. The primitives were simulated and the results analyzed based on the trade-offs between power consumption and dispersal efficiency. The random approach proved to be a poor choice, while the 1-threshold strategy was best overall. Other algorithms remain useful depending on where the user's priorities lie.
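
One plausible reading of the N-threshold rule is sketched below (the thesis code and its exact happiness metric are not reproduced; the constants and names are illustrative): an agent that hears more than N neighbors within radio range is unhappy and steps directly away from their centroid, while a happy agent stays put.

    import java.util.List;

    final class ThresholdDisperser {
        static final double RANGE = 10.0;  // radio range (illustrative units)
        static final double STEP = 1.0;    // distance moved per tick
        final int n;                       // happiness threshold (1 was best overall above)

        ThresholdDisperser(int n) { this.n = n; }

        /** One dispersion step: returns the agent's new {x, y} position. */
        double[] step(double[] me, List<double[]> neighbors) {
            double cx = 0, cy = 0;
            int inRange = 0;
            for (double[] p : neighbors) {
                if (Math.hypot(p[0] - me[0], p[1] - me[1]) <= RANGE) {
                    cx += p[0]; cy += p[1]; inRange++;
                }
            }
            if (inRange <= n) return me;                // happy: few enough neighbors
            cx /= inRange; cy /= inRange;               // centroid of audible neighbors
            double dx = me[0] - cx, dy = me[1] - cy;
            double len = Math.hypot(dx, dy);
            if (len == 0) { dx = 1; dy = 0; len = 1; }  // on the centroid: pick a direction
            return new double[] { me[0] + STEP * dx / len, me[1] + STEP * dy / len };
        }
    }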

Nicholas Dunnuck, The Effect of Emerging Artificial Intelligence Techniques on the Ethical Role of Computer Scientists, March 2002. [PDF]
Emerging technologies and programming techniques increase our ability to create intelligent software programs. With the advent of viable neural networking solutions, we have come even closer to building artificially intelligent machines. This project outlines the impact of neural networking on the development of artificial intelligence (AI) systems, explores the impact of AI systems on society, and proposes enhanced ethical and professional roles for artificial intelligence developers, with an emphasis on interpersonal communication and impact awareness.

The projections discussed here are provided by both technology experts and concerned non-experts. Computer systems will continue to get more powerful and will become increasingly ubiquitous, making standards for the development of artificial intelligence a salient topic in modern engineering. Despite a socially ingrained fear of intelligent machines, there is no governing body to oversee the continued development of AI systems.

Development of a strong artificial intelligence would surely call into question (for some) that which we define as "alive." It is yet unclear whether an electronic entity would be entitled to legal and civil rights. Furthermore, we do not know whether such an entity or race of entities would be dangerous to society. These problems indicate a strong ethical component in the development of intelligent software. This paper argues that intelligent machines will be intertwined in our future society, and addresses the lack of a concrete body to govern the development of computer software. The accompanying research further establishes that engineers will have increased ethical and political responsibilities in the development of artificial intelligence systems in the future.

David Kit Friedman, Using Splint to Detect Buffer Overflow Vulnerabilities in Sendmail, April 2002. [PDF]
Despite the intense growth of computer science research in recent decades, a simple but significant problem in security remains: buffer overflow vulnerabilities. In this paper we explore using static analysis to mitigate the problem.

Buffer overflow attacks exploit unsafe languages like C, which do not check the bounds of array accesses. Evidence shows that buffer overflow vulnerabilities remain a significant problem despite the development of safer languages.

The tool we studied, Splint, attempts to detect buffer overflow vulnerabilities in source code before deployment. It does not attempt to formally prove that an implementation matches a specification; instead, it takes a more lightweight approach that allows it to run as fast as a compiler and still catch many vulnerabilities.

Although the case study did not reveal any buffer overflow vulnerabilities in Sendmail, it did uncover some important bugs in Splint.

William H. Haubert, III, An Interactive Approach to Secure and Memorable Passwords, April 2002. [PDF, Word]
Login schemes are required to authenticate users for access to a network, and most use a username and password. Login schemes can be broken into two categories: interactive and passive. Many current password schemes in either category have flaws: interactive schemes often have slow login times, while passive schemes produce passwords that are hard to remember. My project developed an algorithm intended to generate a password that is both secure and easy to remember. The algorithm requires the user to enter an n-letter word; two random letters are then inserted between every user-entered letter, with the first letter being a vowel. The password derives its security from the random letters and is easy to remember because of the user-entered letters.

I analyzed two properties of the algorithm: security and memorability. The security analysis produced a formula for determining n given a desired effective keyspace: n = log(keyspace) / log(number of two-letter combinations). The memorability analysis was an experiment in which one set of users was given a random password and another set a password created by my algorithm; after one day, the users returned and attempted to log in. The results of this study were inconclusive: 61% remembered the random password and 65% remembered the algorithm password.
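
A sketch of the generation step appears below, under one reading of the construction: a random vowel-then-letter pair is appended after each user-entered letter (the thesis may place the pairs slightly differently, for example only between letters).

    import java.security.SecureRandom;

    /** Sketch of the construction: user letters interleaved with random vowel+letter pairs. */
    final class MemorablePassword {
        private static final String VOWELS = "aeiou";
        private static final SecureRandom RNG = new SecureRandom();

        static String generate(String word) {
            StringBuilder pw = new StringBuilder();
            for (char c : word.toCharArray()) {
                pw.append(c);
                pw.append(VOWELS.charAt(RNG.nextInt(VOWELS.length())));  // vowel first...
                pw.append((char) ('a' + RNG.nextInt(26)));               // ...then any letter
            }
            return pw.toString();
        }
    }

Under this reading each random pair is one of 5 x 26 = 130 two-letter combinations, so an n-letter word yields an effective keyspace of 130^n, which inverts to the formula above.
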
Lap Fan Lam, E-mail Viruses Detection: Detect E-mail Virus by Network Traffic, March 2002. [PDF, Word]
Electronic mail viruses cause substantial damage, and traditional anti-virus methods are very expensive. This report presents a new anti-virus method that runs on the mail server and detects e-mail viruses by monitoring network traffic. The program is called the e-mail traffic monitor. An e-mail traffic monitor can potentially reduce anti-virus costs, since it only needs to be installed on the mail server, and it can also detect new viruses based on their behavior.

A simulation model and an e-mail traffic monitor prototype were developed in this project to test whether this method is feasible. This report assesses that feasibility based on the simulation results.

Errol Charles McEachron, A System for Synthesizing Swarm Programs, April 2002. [PDF, Word]
Recently, advancements in computing technology have revolutionized the traditional concept of computer programming. Future programs will operate on collections of mobile processors that communicate over wireless networks and function in dynamic environments. These collections can be viewed as computational swarms, similar to those swarms found in nature, such as ants or bees. As with any swarm, its behavior emerges from the collective behaviors of its individual members. Thus, a swarm's behavior must be resilient to the misbehavior of a few individual members. This concept marks the fundamental difference between swarm programming and traditional programming.

Swarm programming requires a flexible system that can handle the dynamic nature of swarm environments and the random failure of swarm devices. This requirement makes swarm programming rather time consuming and quite tedious. This project addressed this issue by developing a software system to generate swarm programs from a set of high-level descriptions.

The software system was tested with an example search-and-rescue application, which includes the disperse, converge, and rescue swarm behaviors. Although the software system successfully assimilated the three behaviors into an acceptable swarm program, the test results indicate that the system may not handle all variations of the same swarm application equally well. Studying this issue and making any appropriate modifications to the software system could be an interesting application of future research.

William E. Oliver, Analyzing Group Behavior: Developing a Tool to Evaluate Swarm Programs, 26 March 2002. [PDF]
Swarm computing is a promising, state-of-the-art area of research in computer science with many possible applications, which makes it an important research topic. At its most basic, swarm computing means programming a group of computing elements to work together and achieve some goal. With the recent rapid advancement in computer hardware, swarm computing has become a reality, and now it is up to computer science to make use of these advancements.

The Department of Computer Science at the University of Virginia has been working with swarm programming by applying it to simulated soccer. The broad goal of this project is to aid in this research initiative. Swarm programming provides a special challenge in that most programs do not simply succeed or fail. There are many levels of success and failure which must somehow be measured. The specific objective of this project is to achieve this by developing a software tool that can evaluate the performance of a simulated soccer team's defense.

Given an XML server log created by a simulation, the tool evaluates the team's defensive performance against five separate evaluation criteria developed for this purpose. The software was developed with information hiding and modularity in mind, and is therefore easier to work with from a programmer's standpoint than less organized code. It was also designed for flexibility and extensibility, so that future users can add or remove their own evaluation criteria as well as assign weights to each one.

The resulting software has both advantages and disadvantages over manual evaluation. It is much faster, more flexible and extensible, and can provide quantitative analysis and results. However, it can require heavy computer usage, has a limited evaluation scope, and its correctness can only be determined through manual means. The software should be useful to programmers attempting to get an idea of how well their group-behavior software is performing: it provides a quantitative analysis of the behavior of a simulated group of objects so that the programmer may see the strengths and weaknesses of the algorithms in use. The short-term goal is a tool to evaluate a specific type of swarm program, which the developed tool does. The long-term desired result is better swarm programming and an increased potential for swarm computing; this cannot yet be determined, but the potential is there.

Kenneth Pickering, Evaluating the Viability of Intrusion Detection System Benchmarking, March 2002. [PDF, Word]
This report evaluates the DARPA-LL intrusion detection system evaluation. Intrusion detection systems are not easily constructed or maintained due to the almost daily evolution of network traffic and known exploits. The most popular and rigorous benchmark devised to date is the DARPA-LL 1998 and 1999 Intrusion Detection System Evaluation. I evaluate it through analysis of its published documentation as well as experimentation using different rule customizations.

Snort was selected because of its price and easy customization. Through manipulation of its rules files, Snort could be customized to perform better in certain situations under the DARPA-LL evaluation criteria. This shows that the benchmark can be easily manipulated: developers looking to enhance performance can alter their rules files to better detect the test attacks, making the evaluation less a test of developers' true systems and more a test of how well developers can interpret the testing data.

This project shows that benchmarking intrusion detection systems cannot be done effectively at this time. Until we develop more advanced artificial intelligence and data-mining techniques, it will be very hard to evaluate intrusion detection systems: the amount of customization that goes into using one effectively, as well as the ever-changing set of viable network exploits, makes effective benchmarking impossible for now.

Douglas Ross, Hyperfridge.com, March 2002. [PDF, Word, hyperfridge.com]
Magnetic Poetry, the brainchild of artist/musician Dave Kapell, is a social game. Several hundred thin vinyl magnets, each bearing a single word, are placed on a metallic surface in a communal area within a household (often the refrigerator door). Over time, housemates and guests arrange the magnets into short poems or haikus, records of epiphanies, erotic suggestions, etc.

Hyperfridge.com recreates this game on a global scale via the World Wide Web. Visitors to the Hyperfridge website can drag and drop virtual magnets on a virtual refrigerator door. Their expressions (poems) remain for all future visitors to read, or to deconstruct for word components to make their own poems. The positions of all the magnets are stored in a database.

On Hyperfridge, visitors may add new words. New words are accepted at a maximum rate of once every tenth of a sidereal day (8,616 seconds, or roughly 2 hours and 24 minutes). Each new word replaces a seldom-used word, keeping the total number of word magnets balanced at 500. A spell checker responds to unknown words by suggesting alternatives, but, if the visitor insists, Hyperfridge will accept almost any word (under 26 letters long).
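
The site's code is not shown here, but the policy just described fits in a few lines. The sketch below is an illustrative reconstruction (the data structure and method names are assumptions): one accepted word per tenth of a sidereal day, with the least-used word evicted to hold the pool at 500 magnets.

    import java.util.HashMap;
    import java.util.Map;

    /** Sketch of the word-pool policy: rate-limited submissions, least-used eviction. */
    final class WordPool {
        static final long MIN_INTERVAL_MS = 8_616_000L;  // 0.1 sidereal day, in ms
        static final int POOL_SIZE = 500;

        private final Map<String, Integer> useCounts = new HashMap<>();
        private long lastAcceptedMs = -MIN_INTERVAL_MS;  // allow the first submission

        /** Called whenever a magnet is used in a poem, so "seldom used" stays current. */
        synchronized void noteUse(String word) { useCounts.merge(word, 1, Integer::sum); }

        /** Accepts the new word if the rate limit allows; evicts the coldest word. */
        synchronized boolean submit(String word, long nowMs) {
            if (nowMs - lastAcceptedMs < MIN_INTERVAL_MS) return false;  // too soon
            if (useCounts.size() >= POOL_SIZE) {
                String coldest = null;
                int fewest = Integer.MAX_VALUE;
                for (Map.Entry<String, Integer> e : useCounts.entrySet()) {
                    if (e.getValue() < fewest) { fewest = e.getValue(); coldest = e.getKey(); }
                }
                useCounts.remove(coldest);               // seldom-used word leaves the fridge
            }
            useCounts.put(word, 0);
            lastAcceptedMs = nowMs;
            return true;
        }
    }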

A major design goal for the Hyperfridge project was that it be capable of sustained autonomous operation. Hyperfridge went live on the web on Valentine's Day 2002 and has required only slight adjustments to its internal operations since then.

J. Adam Sowers, Using Identity-Based Encryption to Eliminate Certificates in SSL Transactions, 26 March 2002. [PDF]
This thesis report discusses an alternative implementation of the Secure Sockets Layer (SSL) protocol currently in use for secure communications on the Internet. Using a different cryptographic protocol than the current SSL standard, this new implementation uses identity-based encryption (IBE) to eliminate the need for server-side certificates. The new system, called IBE-SSL, relies on a private key generator (PKG) to create a private key for the server. The server can then use its private key to decrypt any message sent to it by a client using the server's DNS name as a public key. The system can be implemented in modern browsers and would provide an alternative security system for web servers and clients.

The report includes a history of the system along with the underlying mathematical basis that provides its security. It also describes the IBE scheme's method of securely generating keys and its encryption and decryption functions, followed by an overview of the IBE-SSL system implementation. An efficiency analysis gauges the feasibility of using the implementation in an industrial setting. A simple yet complete implementation of the IBE-SSL system was completed, and the source code is available online (ibessl-1.0.tar.gz); it includes a sample private key generator as well as a test client and server.
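
At a high level, the handshake described in the report replaces certificate verification with identity-derived keys. The sketch below is only structural: the IdentityBasedEncryption interface is hypothetical, standing in for the pairing-based primitives in the ibessl-1.0 code, which are not reproduced here.

    /** Hypothetical interface standing in for pairing-based IBE primitives. */
    interface IdentityBasedEncryption {
        byte[] encrypt(String identity, byte[] plaintext);     // the identity IS the public key
        byte[] decrypt(byte[] privateKey, byte[] ciphertext);  // key issued once by the PKG
    }

    final class IbeSslSketch {
        /** Client: no certificate needed; encrypt a fresh session key to the DNS name. */
        static byte[] clientKeyExchange(IdentityBasedEncryption ibe,
                                        String serverDnsName, byte[] sessionKey) {
            return ibe.encrypt(serverDnsName, sessionKey);
        }

        /** Server: recover the session key with the private key the PKG issued for its name. */
        static byte[] serverKeyExchange(IdentityBasedEncryption ibe,
                                        byte[] pkgIssuedPrivateKey, byte[] message) {
            return ibe.decrypt(pkgIssuedPrivateKey, message);
        }
    }

Only the holder of the PKG-issued private key for the server's DNS name can recover the session key, which is what removes the need for a server-side certificate.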

Lim Vu, Securing Web Communications, 25 April 2002. [PDF]
Web services today, such as real-time instant messaging and file sharing, are insecure. This thesis implements an encryption program capable of giving confidentiality to these services. The encryption program acts as a proxy server, using modern encryption to secure all data transmissions of any Internet communication service. Thus, any application or service with proxy server support can secure its transmissions.

With an increasing reliance on digital data, the security of the digital communications medium has become more important: without proper protection, the sensitive information these digital lines carry is exposed to anyone who cares to look. This thesis fills a gap in the protection of newer web communications using an encryption proxy. We describe the design using UML. The implementation makes use of open source software and the Java Cryptography Extension to produce a robust, portable, and efficient encryption proxy.

This system can be used to reduce the cost of porting applications and to give companies increased profits by allowing them to enter new markets more cost-effectively.

2001

Christopher Barker, Static Error Checking of C Applications Ported from UNIX to WIN32 Systems Using LCLint, March 2001. [PDF, Word]
Since personal computer prices have dropped dramatically over the past decade, more companies are feeling a market push to support their software on new platforms such as Microsoft Windows. Companies have found it more cost-effective to move older software to the new platforms than to rebuild the software from scratch. Therefore, many companies are attempting to port their old C applications from UNIX to WIN32 operating systems such as Windows NT or 2000, which run on the cheaper PC hardware. When porting C applications from UNIX to WIN32 systems, however, a variety of coding issues can cause the newly ported application not to function correctly. Since it is important for companies to stay competitive in this changing market, these porting issues should be addressed efficiently. By examining documented sources from past software porters and project managers, I compiled a list of coding issues. I used this list in conjunction with LCLint, a popular static checker, to create a set of user-defined annotations that allow LCLint to recognize specific porting bugs and warn the programmer.

Felipe Huici, A Database-backed Personal Information System for Automatic Creation of Home and Summary Web Pages, March 2001. [PDF, Word]
Like most academic communities, Computer Science graduate students at the University of Virginia need to share personal information. Unfortunately, until recently there was no simple or quick way of doing this. A student could provide a text file in his or her home directory, but only users with accounts on the Computer Science server could view it; alternatively, a student could create a home page, but this was very time-consuming. To solve this, I implemented a system that reduces effort and time and offers a simple means of accessing the information. Using shell scripts, Perl, PHP, and MySQL, the system collects the information from students' text files and deposits it in a database. Scripts then allow anyone with access to the Internet to view automatically generated home pages, and to search and summarize the information in them. I concluded that a system based on database-backed home pages saves time, allowing a user to have a home page up in literally minutes, and provides a wide-reaching medium for information sharing.

Jennifer Kahng, Evaluating Web Browser Security Interfaces for a More Meaningful Design, March 2001. [PDF, Word]
As more and more services become available on the Internet, the issue of online security gains importance. The World Wide Web, a common method for interacting with the Internet, provides mechanisms for people to take advantage of many services: accessing bank accounts, purchasing materials, conducting research, etc. Web browsers, the software used to access the Web, also provide protection against security vulnerabilities online. However, current web browser security messages lack meaningful content and often display in inappropriate situations, interrupting the user unnecessarily. Thus, users learn to ignore or remove the messages, even though the messages may be helpful in certain situations. Web browsers use security policies to determine when to display security warnings, but current policies are too generic. Before developing stronger policies, some mechanism to regain user attention should be in place, or the policies may be ineffective. This thesis project evaluated alternate designs for security warnings. The results illustrate that attracting a user's attention in appropriate situations is difficult: simply modifying the format or layout of a security message is not sufficient to capture the user's attention and sustain it. Combining new warning designs with stricter policies and promotion of user education in security should help users become aware of, and alert to, their computing environment.

Ryan Persaud, Investigating the Fundamentals of Swarm Computing, March 2001. [PDF, Word]
Swarm computing involves devices operating like a swarm of insects. A swarm device is a small mobile robot with sensing capabilities, so swarm computing is significantly different from traditional computing: swarm programs must be able to handle unknown environments and nodes that are capable of moving. Because swarm computing differs so drastically from traditional computing, traditional programming techniques will not work, and before new techniques can be developed, a better understanding of a swarm of computing devices is needed. Through experimentation, I sought to gain an understanding of the basic principles of swarm computing. I used the libraries in the Swarm Development Group's package to create multiple programs implementing the same swarm behaviors (primitives). Having written these programs, I ran them in the Swarm simulator and compared characteristics such as the power consumption of nodes. On completing this project, I delivered a set of swarm primitives and the information I gathered on basic swarm behavior. These primitives are not particularly useful by themselves, but they can be combined and integrated to form swarm applications that actually perform a useful task. Ideally, other researchers can use my insights to develop future swarm programs, writing programs for swarms of computational devices to accomplish such tasks as the exploration of Mars, clearing blockages in arteries, or other tasks that require the coordination of a multitude of devices.

Daniel Rubin, The Security of Remote On-Line Voting, March 2001. [PDF, Links]
The infeasibility of remote online voting can be shown through a security analysis of its previous uses and technological risks. My project focuses on two cases where voters cast their ballots over the Internet: the 2000 Arizona Democratic Primary and the University of Virginia Student Council elections. I ran Student Council elections for two semesters and recount my experiences as an elections administrator; the Arizona election is evaluated based on reports and commentary about it. This project also reviews the underlying technology that makes remote online voting possible and assesses the security risks.

Adam Trost, Extendable Swarm Programming Architecture, August 2001. [PDF, Word]
Computing is beginning to change as programs start to execute over many mobile processors communicating over ad hoc networks. Collections of these processors can be described as a "swarm." The behavior of a swarm is characterized as the total behavior of all its individual components but, unlike traditional distributed programs, swarms exist dynamically in unpredictable environments. The major challenges are designing programs for the units that produce a desired swarm behavior and, on the other side, predicting behavior from the programs running on the units. The soccer simulation competition within the RoboCup 2001 conference is the medium for this swarm research. The conference uses a soccer simulation to focus on cooperation between autonomous agents in dynamic multiagent environments. The simulation league comprises a server acting as the field and eleven clients for each team, which act as the players. The field is an unpredictable dynamic environment, while the players are thought of as the cooperative swarm. This research addresses the challenges of swarms by implementing an extendable object-oriented architecture for a RoboCup soccer player.

Testing the ease of adding the centering and dispersing defensive behaviors demonstrates the benefits of the program architecture: the extendable object-oriented design made the two behaviors easy to implement in the swarm program of the RoboCup soccer player. Though RoboCup is only one application of swarm programming, the architecture can be applied to many others. If utilized, it could help the swarm development community evolve more effectively.

Phil Varner, Vote Early, Vote Often, and VoteHere: A Security Analysis of VoteHere, March 2001. [PDF, PS]
Recently there has been much academic and popular discourse about the sociological and political ramifications of conducting public elections over the Internet. According to the many groups and individuals wishing to profit from online voting, the technical problems are solved, and only the political and sociological ones need debate. However, this is far from reality. There currently exist significant, insurmountable technical hurdles that make an adequately secure and private Internet voting system impossible. This thesis demonstrates the many potential security and privacy problems with one of the most widely used systems, VoteHere from VoteHere.net.

I first present background information on voting, cryptographic voting protocols, and available system implementations. I then present FaSSAMM, the Fairly Simple Security Analysis and Modeling Methodology, a technique for easily conducting security analyses. This process is then used to analyze the documented VoteHere system for vulnerabilities. The result is an easy-to-understand demonstration of security problems with Internet voting that can be easily extended or detailed further. After conducting this security analysis, I believe even more adamantly that Internet voting should not be used for governmental elections now or in the near future.

Julie Vogelman, Determining Web Usability Through an Analysis of Server Logs, March 2001. [PDF, Word]
Almost half of American companies currently conduct business online. Knowing how to create usable web sites is valuable for companies on the World Wide Web, since providing a satisfactory web experience encourages clients to stay with the company rather than seek out the competition. Although a lot of research is already available on usable web design, the research is incomplete. To close one gap, I built a framework for analyzing users' reactions to web sites. The application accepts two versions of a web site and redirects half of the users to each version; it then analyzes log files to compare how users reacted to the two versions, including how long users spent on each page before leaving and which links they selected. This project also used the framework to compare the usability of the Arial font to that of Times New Roman, and found Arial to be more usable. A second, more tightly controlled experiment sought to identify the ideal quantity of bold text on a web page.
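
The assignment step of such a framework can be as simple as a deterministic split on a stable visitor identifier, so each user keeps seeing the same version across pages. The sketch below illustrates the idea only; it is not the thesis code, which worked by redirecting requests on the server:

    /** Sketch: deterministically send each visitor to one of two site versions. */
    final class VersionSplitter {
        static String chooseVersion(String visitorId, String versionAUrl, String versionBUrl) {
            // Same visitor always hashes to the same version, roughly a 50/50 split,
            // so log entries for the two variants can be compared later.
            return (visitorId.hashCode() & 1) == 0 ? versionAUrl : versionBUrl;
        }
    }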

David Evans
evans@virginia.edu
Last modified: Thu Jul 18 00:30:23 2002