Solar Drone
University of California, Berkeley
The problem of long-endurance flight has yet to be adequately solved with unmanned aerial vehicles (UAVs). Conventional powertrains are limited by the finite capacity of their non-renewable fuel, but in theory a renewable energy source would permit self-sustaining flight. However, most formal research to date concentrates on aircraft with immense wingspans and overlooks miniature UAVs because of the difficulty of downscaling effectively. This team intends to bridge that gap by demonstrating multiple-day flight using an autonomous solar-powered UAV with a wingspan of 3 m or less.
Meticulous power-saving techniques make continuous flight practical. Minimizing the mass of the airplane decreases the load on the motor, saving energy. Further efficiency relies on an embedded system that, with minimal overhead, can dynamically tune aircraft behavior for strategic power conservation, such as gliding, disabling non-critical electronics, and adjusting altitude. The objective for the UAV is to maintain a permanent station in the air without depending on persistent human direction.
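Purely as an illustration (not the team's actual flight software), the dynamic power-conservation policy described above might look like the following sketch, where the mode chosen each control period trades stored altitude against battery state; all thresholds, names, and telemetry fields are hypothetical.

```python
# Hypothetical sketch of a power-conservation policy: choose a flight
# mode and payload setting from battery state and solar input.
from dataclasses import dataclass

@dataclass
class PowerState:
    battery_frac: float   # state of charge, 0.0-1.0 (assumed telemetry)
    solar_watts: float    # instantaneous panel output
    altitude_m: float

def choose_mode(s: PowerState) -> dict:
    """Return a flight mode and payload setting for the next control period."""
    if s.solar_watts > 40 and s.battery_frac > 0.8:
        # Surplus energy: climb to bank potential energy for the night.
        return {"mode": "climb", "noncritical_payload": True}
    if s.battery_frac < 0.3:
        # Low battery: trade stored altitude for range, shed loads.
        mode = "glide" if s.altitude_m > 300 else "loiter"
        return {"mode": mode, "noncritical_payload": False}
    return {"mode": "cruise", "noncritical_payload": True}

print(choose_mode(PowerState(battery_frac=0.2, solar_watts=5, altitude_m=800)))
# -> {'mode': 'glide', 'noncritical_payload': False}
```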
Many applications would benefit from an inexpensive, autonomous, permanently airborne platform: weather tracking, emergency response communications, and high-altitude scientific research. Its smaller size, lower cost, and fewer maintenance requirements will allow the quick deployment of application-specialized drones in a sky-based network.
Sentinel
University of California, San Diego
Sentinel: The Intelligent Wildlife Video Recording System
“Never-before-seen footage” is powerful because it provides a new perspective on the universe. All of a sudden, the world isn’t the same anymore; the universe’s dynamic nature is laid bare for all to see.
Though YouTube has effectively documented the complete anthology of human behavior, many animal behaviors remain a mystery. Sentinel, a wildlife video recording system, aims to unravel this mystery by capturing the behaviors of elusive and endangered species on film. Current video traps either record continuously, leaving the end user with days of mostly useless video to sift through, or ineffectively capture bits and pieces of behaviors, leaving the user with incomplete observations. Sentinel is an intelligently triggered video trap that senses and sees its environment in order to efficiently capture high-definition video of all animals that cross its path.
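The trigger is the technical heart of such a system. Purely as an illustration (not Sentinel's actual code), a trap might combine a passive-infrared (PIR) sensor with frame differencing; all thresholds here are invented.

```python
# Hypothetical trigger sketch: record when a PIR sensor fires or when
# frame differencing shows enough motion between consecutive frames.
import numpy as np

MOTION_FRAC = 0.02        # fraction of changed pixels that counts as motion
PIXEL_DELTA = 25          # per-pixel change threshold (8-bit grayscale)

def motion_detected(prev: np.ndarray, curr: np.ndarray) -> bool:
    """True if enough pixels changed significantly since the last frame."""
    changed = np.abs(curr.astype(int) - prev.astype(int)) > PIXEL_DELTA
    return changed.mean() > MOTION_FRAC

def should_record(pir_fired: bool, prev: np.ndarray, curr: np.ndarray) -> bool:
    return pir_fired or motion_detected(prev, curr)
```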
Columbia SWARM
Columbia University
Numerous environments are, for a myriad of reasons, inaccessible to human exploration. Our project aims to map these environments using a heterogeneous swarm of microcontroller-controlled robots that autonomously take sensor readings and use this data to build a map of their surroundings. The exciting aspect of this project is that it applies directly to disaster relief zones, hostile environments, and space exploration.
The main challenge of this project is to manufacture numerous low-cost robots and incorporate them into a system robust enough that the loss of a robot does not affect the function of the swarm. The overall system will incorporate sensor processing, wireless communications, probabilistic mapping, and autonomous robot control. Our solution is unique and exciting in that it segregates the swarm into different roles based on the processing power of each individual robot, enabling our swarm to be more robust and to utilize each robot to its fullest.
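To illustrate the role-segregation idea only (this is not the team's code), here is a minimal sketch assuming each robot advertises a rough processing-power score; the robot names, scores, and role names are invented.

```python
# Illustrative role segregation by processing power: the most capable
# robots map, mid-tier robots relay, and the rest scout.
robots = {"r1": 1200, "r2": 300, "r3": 950, "r4": 310, "r5": 90}  # MIPS-like scores

def assign_roles(swarm: dict) -> dict:
    ranked = sorted(swarm, key=swarm.get, reverse=True)
    roles = {}
    for i, rid in enumerate(ranked):
        if i < len(ranked) // 3:
            roles[rid] = "mapper"   # heavy probabilistic-mapping workload
        elif i < 2 * len(ranked) // 3:
            roles[rid] = "relay"    # wireless-communication backbone
        else:
            roles[rid] = "scout"    # cheap sensor-reading platform
    return roles

print(assign_roles(robots))
```

Re-running the assignment over the surviving robots after a loss is one way the described robustness could be realized.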
GT Accessors
Georgia Institute of Technology
Accessibility describes the degree to which people with disabilities can interact with the world around them. Unfortunately, most applications (apps) for smartphones and tablets are not designed with accessibility in mind, especially for those with upper-body motor impairments. Imagine, then, how much access to technology could expand if we provided alternative input interfaces that increase the accessibility of tablet-based applications.
One such application area is rehabilitation for children with Cerebral Palsy (CP), the leading cause of childhood disability affecting function and development. Approximately half of these children experience dysfunction in upper-extremity activities. These children therefore tend to have difficulty using devices that require fine motor control, such as the common pinch and swipe gestures required for tablet access.
As a partial solution to this issue, this team proposes the development of a unique interface device, TabAccess, which provides wireless access to tablet devices. TabAccess will utilize a sensor system that enables individuals who lack fine motor skills to provide the inputs necessary to control their desired apps. We will also test our interface device with a team-designed Virtual Piano Player, based on clinical evidence that supports the use of music in pediatric physical therapy.
Team Alpha – GT Night Rover
Georgia Institute of Technology
The Intel Cornell Cup will provide a platform on which to perfect engineering designs and computing algorithms for the GT Night Rover. The GT Night Rover aims to store and use electrical and/or thermal energy efficiently while investigating systems for prolonging the useful mission life of a robotic planetary rover (planetary in the general sense). The final prototype will be an autonomous rover that locates sources of solar energy and continues moving through a full day/night cycle. This challenge will serve as a proof of concept for a more robust system: a rover that survives, and provides persistent functionality over, multiple day/night cycles while moving and collecting useful information. The Intel Atom board will enable more complex control algorithms while leaving a clear margin of power usage for future versions of the rover, exploring novel capabilities of the board and pushing the boundaries of engineering design.
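Purely as an invented illustration of the solar-seeking behavior (the grid values and energy model are hypothetical, not the team's), the rover might drive toward the sunniest cell reachable on its remaining charge:

```python
# Invented sketch: from a grid of estimated insolation, pick the
# brightest cell within the rover's remaining travel range.
import numpy as np

insolation = np.array([[0.2, 0.5, 0.9],
                       [0.1, 0.4, 0.8],
                       [0.0, 0.3, 0.6]])   # normalized insolation estimates
rover_rc = (2, 0)                           # current (row, column)
range_cells = 2                             # cells reachable on remaining charge

best, best_val = rover_rc, -1.0
for r in range(insolation.shape[0]):
    for c in range(insolation.shape[1]):
        # Manhattan distance as a crude travel-cost model
        if abs(r - rover_rc[0]) + abs(c - rover_rc[1]) <= range_cells:
            if insolation[r, c] > best_val:
                best, best_val = (r, c), insolation[r, c]

print("drive to", best)   # -> (2, 2) here; recharge there, then repeat
```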
Audio(G)Fusion
University of Houston
Audio(G)Fusion integrates an Intel processor with an electric guitar. The user may alter audio filters using a touchscreen mounted to the body of the guitar. Once the user has configured a filter, they may assign it to a preset button to quickly enable or disable it while performing. The user may choose any combination of filters; the maximum number of filters is limited only by the Intel processor’s processing power.
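As a rough sketch of how such a preset system could be organized (not the team's implementation; the filter choices and parameters are invented), each preset button toggles a configured filter in the block-processing chain:

```python
# Minimal filter-chain sketch: preset buttons enable or disable
# configured filters applied to each block of guitar samples.
import numpy as np

def distortion(x, gain=8.0):
    """Soft-clipping distortion via tanh waveshaping."""
    return np.tanh(gain * x)

def tremolo(x, rate_hz=5.0, sr=44100):
    """Amplitude modulation by a low-frequency sine."""
    t = np.arange(len(x)) / sr
    return x * (0.5 + 0.5 * np.sin(2 * np.pi * rate_hz * t))

presets = {1: distortion, 2: tremolo}   # preset button -> configured filter
enabled = {1: True, 2: False}           # toggled from the touchscreen

def process_block(x):
    """Run one block of samples through every enabled filter, in order."""
    for button, f in presets.items():
        if enabled[button]:
            x = f(x)
    return x

block = np.random.uniform(-1.0, 1.0, 512)   # stand-in for a pickup buffer
out = process_block(block)
```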
Green Lighting
Howard University
The goal of the Green Lighting project is to implement a system that enhances the work environment in a given room by regulating the intensity of the lights throughout the day while minimizing energy costs. This addresses the energy costs of maintaining standard or optimal lighting conditions in a workspace. The system will consist of light sensors connected to low-end microcontrollers in each relevant sector of the room, which link back to a Tunnel Creek board. The board will process the data, decide how the intensity of each light fixture should be varied to maintain lighting standards or user preferences, and send the corresponding commands to the lighting control circuitry. In addition to a physical override circuit, the user will have access to a web interface through which the main system controls for each room can be manipulated, making it easy to manage an entire building’s lighting from one terminal. The system will give users appropriate lighting throughout the course of the day, creating a productive and enjoyable working environment while saving energy costs for the institution. A prototype of this embedded system will be implemented and demonstrated to evaluate its effectiveness in workplace productivity and energy cost savings.
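A minimal sketch of the per-sector decision logic, assuming a lux setpoint and a proportional dimmer adjustment (the 500 lx target and the gain are illustrative assumptions, not the team's values):

```python
# Per-sector control sketch: nudge the dimmer so the measured
# illuminance tracks a setpoint; daylight displaces artificial light.
TARGET_LUX = 500
GAIN = 0.002          # dimmer fraction per lux of error

def update_dimmer(measured_lux: float, dimmer: float) -> float:
    error = TARGET_LUX - measured_lux     # positive when the room is too dark
    dimmer += GAIN * error                # artificial light makes up the gap
    return min(max(dimmer, 0.0), 1.0)     # clamp to the fixture's range

# e.g. bright afternoon sun: the controller dims the fixture toward 0
print(update_dimmer(measured_lux=900, dimmer=0.6))
```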
AAPS – Automated Aero-Painting System
University of Massachusetts, Amherst
Unmanned aerial vehicles (UAVs) are destined to revolutionize aviation technology. Though UAVs have served a variety of applications, there is still a need for reliable automation. To measure automation reliability, we must measure communication sustainability, functional feedback, and autonomous decision making. For our project, we will equip a quadrocopter with a spray-paint canister and set it up to autonomously paint a figure on a vertical surface.
What is exciting and unique about this project is that it not only takes a new approach to UAV automation but also proposes a completely new application for UAVs. The challenge is to design an autonomous system that involves real-time actuator control and constant feedback evaluation during flight. To meet the needs of that design, the project consists of three main components: (1) the Base Processing Unit (BPU); (2) the Quadrocopter CPU (Intel Atom processor); and (3) the Actuator Control System (ACS). The BPU will guide and command the quadrocopter during task execution. The quadrocopter’s CPU will relay information about its position and stabilization to the BPU. The ACS will interface directly with the spray-paint canister and will be coordinated by the BPU and the quadrocopter’s CPU.
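To make the division of labor concrete, here is a hypothetical sketch of one BPU control step, assuming the quadrocopter reports a 2-D pose on the wall plane and the BPU replies with velocity or spray commands; the tolerances, speeds, and message format are all assumptions.

```python
# BPU-side control sketch: steer toward the next point of the figure;
# once the craft is on it, trigger the ACS and advance along the path.
import math

def bpu_step(pose, waypoint):
    """One BPU step. The quadrocopter's CPU is assumed to handle
    low-level stabilization; the BPU only issues high-level commands."""
    dx, dy = waypoint[0] - pose[0], waypoint[1] - pose[1]
    dist = math.hypot(dx, dy)
    if dist < 0.05:                       # within 5 cm of the path point
        return {"cmd": "spray_and_advance"}
    return {"cmd": "move", "vx": 0.2 * dx / dist, "vy": 0.2 * dy / dist}

print(bpu_step(pose=(0.0, 0.0), waypoint=(1.0, 0.5)))
```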
Team Wolf
University of Massachusetts, Amherst
The vision of augmented reality is to provide users with relevant information to supplement their normal interactions with their environment. This technology promises to fundamentally improve many common applications that currently require specific devices or cumbersome interactions (e.g., driving directions, searches for local information, information about individuals with whom we interact, etc.). To achieve this vision, however, there are numerous technical challenges that still need to be addressed. In this project, we propose to develop a system that can perform the key functionalities of an augmented-reality system: (1) sensing a user’s location and direction of view, (2) computing what to display in the user’s field of view, and (3) displaying the visual content without obstructing the user’s view of real objects.
We present a design concept that aims to surpass prior attempts in its simplicity, efficiency, and functionality, to create a truly real-time fusion of analog and digital worlds. Our design consists of a sensor unit, an Intel Atom processor, and a goggle-based display. The sensor unit obtains the user’s live position and movement data and sends it to the processor via a USB cable. The data is then processed through an integrated GPU and projected through the goggle display.
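As a sketch under stated assumptions (a flat 2-D world and a simple pinhole-style mapping, both invented for illustration), step (2) above, computing what to display, might reduce to deciding where a labelled world point falls in the user's field of view:

```python
# Sketch: place a label on screen given the user's position and heading.
import math

FOV_DEG = 60.0      # assumed horizontal field of view
SCREEN_W = 800      # assumed display width in pixels

def label_x(user_xy, heading_rad, point_xy):
    """Return the horizontal pixel for a label, or None if out of view."""
    dx, dy = point_xy[0] - user_xy[0], point_xy[1] - user_xy[1]
    bearing = math.atan2(dy, dx) - heading_rad                   # angle vs. gaze
    bearing = math.atan2(math.sin(bearing), math.cos(bearing))   # wrap to [-pi, pi]
    half_fov = math.radians(FOV_DEG) / 2.0
    if abs(bearing) > half_fov:
        return None                                              # off-screen
    return int((bearing / half_fov + 1.0) * SCREEN_W / 2.0)

print(label_x(user_xy=(0, 0), heading_rad=0.0, point_xy=(10, 2)))
```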
JouleCycle Team
University of Massachusetts, Lowell
Obesity is recognized as a serious public health problem that leads to many illnesses, such as diabetes and heart disease. According to the CDC, about one-third of U.S. adults and 17% of children and adolescents aged 2-19 years are obese. Because exercise is an effective way to control obesity, the team proposes to design a gaming system called “JouleCycle” to help people get regular exercise and achieve caloric balance. The gaming system is built upon a human-powered bicycle and an Intel Atom development board, without using any battery.
Being battery-less, JouleCycle has the game player pedal a bicycle to generate the power that runs the Atom board and its customized hardware and software. The team will design control circuits to measure energy generation and consumption, a customized BIOS and OS to speed up the boot process, and multithreaded user applications that leverage the power states of the processor and accelerators. The gaming software will quantify and visualize the energy generated by the player and consumed by the system components and different tasks. To make the game interesting and enjoyable, the power generated by the player determines the game’s themes and levels.
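As an invented illustration of how generated power might drive game levels (the team's actual thresholds and level names are not specified), the game could map the rider's power surplus over the system draw onto levels:

```python
# Illustrative energy accounting: the game level is driven by how much
# pedal power exceeds what the system itself is consuming.
LEVELS = [(0, "coast"), (30, "village"), (80, "hills"), (150, "mountain")]

def game_level(pedal_watts: float, system_watts: float) -> str:
    surplus = pedal_watts - system_watts     # net power banked by the rider
    name = LEVELS[0][1]
    for threshold, level in LEVELS:          # thresholds are ascending
        if surplus >= threshold:
            name = level
    return name

print(game_level(pedal_watts=160, system_watts=25))   # -> "hills"
```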
Team Squirtle
Massachusetts Institute of Technology
As with all disruptive technologies, personal robotics has been in search of a killer application to truly drive forward technology adoption, growth, and innovation. We believe that one such application lies within the heart of biotechnology in the form of liquid handling robots – huge couch-sized machines designed to reliably and accurately transfer and mix liquids, used in everything from basic chemistry to the latest cutting-edge cancer research. Despite such versatility, liquid handlers have traditionally been relegated only to high-throughput automation tasks. However, we believe that their use can extend much farther than that. In the same vein as the evolutionary step from mainframe computers to PCs, our project discards conventional wisdom about the traditional design of massive high-throughput machines to build a liquid handler that is smaller, lower-priced, and more intelligent. It is designed around the core idea of allowing individual researchers to advance the frontiers of science with a tool that is faster, more accurate, and more tailored to their personalized liquid handling needs.
HAWK
University of Pennsylvania
We propose the prototyping of a sensor-enabled quadrotor aircraft platform to construct 3D building models to support search and rescue operations. This technology will provide sensor-rich visualization of buildings overlaid with temperature gradients, air contamination, and hazardous zones for more informed rescue strategies. The proposed system has the potential to lower first responder casualties and minimize mission execution time.
The goal of the H.A.W.K. project (Helicopter Aircraft Wielding Kinect) is to build upon a pre-built quadrotor aircraft platform, adding a low-cost depth camera (a stripped-down Xbox Kinect) to conduct rapid Simultaneous Localization And Mapping (SLAM) of a building in a single fly-through. The Intel Atom-powered quadrotors will locally capture depth and image data and transfer it to the base station for SLAM processing and rendering of a 3D building model. Because each craft is deployed with multiple sensors on board, we will incorporate thermal-camera, CO, and SO2 sensor data to overlay thermal gradients and air-contamination information on the captured 3D building model. The system incorporates advanced flight controls, runtime UAV coordination algorithms, and low-cost image and depth perception sensors to execute complex SLAM algorithms.
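As one small, invented illustration of the overlay step (the data structures are not the team's), gas readings logged at known poses during the fly-through could be attached to the nearest vertices of the reconstructed model:

```python
# Illustrative overlay: assign each logged CO reading to the nearest
# vertex of the 3D model, keeping the worst-case value per vertex.
import numpy as np

model_pts = np.array([[0, 0, 0], [2, 0, 1], [2, 3, 1], [0, 3, 2]], float)
readings = [  # (x, y, z, CO ppm) samples logged along the flight path
    (1.9, 0.2, 1.0, 35.0),
    (0.1, 2.8, 1.8, 12.0),
]

overlay = np.zeros(len(model_pts))
for x, y, z, co in readings:
    nearest = np.argmin(np.linalg.norm(model_pts - np.array([x, y, z]), axis=1))
    overlay[nearest] = max(overlay[nearest], co)   # keep worst-case reading

print(overlay)   # per-vertex CO values used to color the 3D model
```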
Kinecthesia
University of Pennsylvania
Over 284 million people are visually impaired worldwide: 39 million are blind and 245 million have low vision (World Health Organization, 2011). The goal of our project is to provide a simple, low-cost solution that aids visually impaired people better than walking canes and Seeing Eye dogs.
With that in mind, we created Kinecthesia, a wearable belt that can detect obstacles and alert the user to their location. We developed a prototype using a Microsoft Kinect for obstacle detection and an array of vibration motors for the feedback system. When the user approaches an object while wearing the Kinecthesia, the belt subtly vibrates. The intensity and location of the vibrations on the belt tell the user exactly where the obstacle is. Using this system, the user can feel their surroundings and navigate around stationary or moving obstacles.
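A minimal sketch of the depth-to-vibration mapping, assuming a 640x480 Kinect depth frame in millimetres split into three belt zones (the zone count and range limit are assumptions):

```python
# Split the depth frame into left/centre/right zones and drive each
# zone's motor harder as the nearest obstacle in that zone gets closer.
import numpy as np

MAX_RANGE_MM = 3500                     # beyond this, no vibration

def motor_levels(depth: np.ndarray) -> list:
    zones = np.array_split(depth, 3, axis=1)      # left, centre, right
    levels = []
    for z in zones:
        valid = z[z > 0]                          # Kinect reports 0 = no data
        nearest = valid.min() if valid.size else MAX_RANGE_MM
        # nearest obstacle -> strongest vibration, clamped to [0, 1]
        levels.append(max(0.0, 1.0 - nearest / MAX_RANGE_MM))
    return levels

frame = np.full((480, 640), 3000, dtype=np.uint16)
frame[:, :200] = 600                              # obstacle close on the left
print(motor_levels(frame))                        # left motor strongest
```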
This system also has applications beyond obstacle avoidance for the blind. It can be used by anyone in a low-visibility situation, such as firefighters or miners. Additionally, the hardware could be repurposed as a navigation system that directs a user toward a destination rather than away from an obstacle.
KIDZ – The Mystics
The Pennsylvania State University
In today’s world, we interact with the digital world more and more, and smart technology has improved every sector of our lives. From laptops, to mobile phones, to tablets, digital content is becoming ever more accessible. InTouch seeks to carry this progress even further by allowing individuals to interact with physical objects through the cameras of their mobile phones. Effectively, we are mixing together digital interaction and physical interaction, two ideas that have largely been separate.
Some potential applications of our project are public bulletin boards, polling devices, and possibly even collaborative symphonies. The possibilities are limited only by the imagination. Imagine walking along the road and seeing a publicly accessible canvas that anyone can draw on simply by moving their fingers on their smartphones. You watch as others play around with the graphics, adding to each other’s work. By creating this accessible public space, we are capturing the imagination of passersby, bringing together strangers who would otherwise never have met. By merging the physical and digital worlds, we are creating an unprecedented level of interactivity in public areas.
IVS
Portland State University
Nowadays, advances in medical care help to save millions of people and cure numerous known diseases. However, the substantial number of pills and drugs that has followed makes identifying them increasingly challenging and time-consuming. In 2006, there were 46 emergency room (ER) visits for every 100 people, with an average visit lasting 2.6 hours [1]. To address the problem, this paper proposes a solution, the Prescription Drug Identification (PDI) device, capable of minimizing time, increasing accuracy, and providing detailed, instant drug information to both patients and doctors. The device is designed mainly for ERs but can also be used in doctors’ offices or at home.
The PDI will use a camera to capture images of drugs. It will then extract drug characteristics such as imprint code, shape, and color through image-processing techniques. These features will be looked up in a built-in offline database or an extensive online database to provide the needed information. Users can also manually search the database for drug details or feed it with the latest information. A device having such breakthrough features will significantly help doctors in emergency rooms, where time and precision are critical factors.
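As an illustration of the lookup step only, here is a sketch under assumptions: extracted features are matched against a local record table before falling back to the online database. The sample records are invented.

```python
# Hypothetical feature lookup: match extracted (imprint, shape, color)
# features against a local table; None means "query the online database".
LOCAL_DB = [
    {"imprint": "L484", "shape": "oblong", "color": "white",
     "name": "acetaminophen 500 mg"},
    {"imprint": "TEVA 833", "shape": "round", "color": "yellow",
     "name": "clonazepam 1 mg"},
]

def identify(features: dict):
    for record in LOCAL_DB:
        # every extracted feature must agree with the record
        if all(record[k] == v for k, v in features.items() if k in record):
            return record["name"]
    return None          # caller falls back to the online database

print(identify({"imprint": "L484", "color": "white"}))
```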
The Incredible HUD
Purdue University
The Incredible HUD (the device) is a novel approach to compact, portable heads-up displays. There is a growing need for compact, rugged, and highly integrated augmented-reality displays that provide relevant, real-time information to the user. Be it motorsport, extreme sports, or even defense, there is no shortage of applications for a device that can enhance the user’s awareness of his or her surroundings. Current solutions are too expensive, too delicate, too bulky, or too complex for mass consumer appeal, giving rise to challenges in optics, weight, power consumption, and device stability and durability.
While overcoming these challenges and coalescing real-time data from a GPS receiver, accelerometer, thermometer, and video camera, the device presents the data to the user in a flexible format. Telemetry data is logged for later review on a computer. An Intel Atom motherboard provides video-processing and display-generation capability. The display itself will allow an unobstructed view of the user’s surroundings without requiring the eye to refocus to accommodate the display. The Incredible HUD’s purpose is to provide a useful display of telemetry to the user via a high-quality, helmet-based heads-up display unit.
Team DART
Seattle Pacific University
According to the Humane Society, four out of ten U.S. households have a dog. This leads to a common problem and nuisance in today’s society: picking up dog waste from one’s yard. This practice is inconvenient and takes precious time out of every dog owner’s day. Numerous companies specialize in dog-waste removal services, but these services are expensive and the costs quickly add up over time. The average dog may produce waste several times per day! Our team’s proposed solution to this problem is the AWR (Autonomous Waste Remover).
The AWR will automatically navigate your lawn and remove dog waste daily. It will be a wireless, battery-powered vehicle that incorporates a complex sensor system to navigate around the yard while avoiding obstacles, collecting dog waste, and returning to a base station, where the collected waste can be disposed of. The base station will also automatically recharge the AWR’s power system.
Team VISIONary
University of Southern California
Approximately 1.3 million people are legally blind in the United States, and approximately 285 million people worldwide have some kind of visual impairment. These people have difficulty navigating both indoor and outdoor environments. Many existing solutions partially solve the outdoor navigation problem, but little attention has been given to indoor navigation. Our team therefore proposes a solution that integrates a Local Position Detection System (LPDS) with a Navigation System (NS) to successfully guide a user through an indoor environment.
The LPDS will use local mapping and image processing to triangulate the user’s position, while the navigation system will utilize obstacle-avoidance algorithms to ensure safe travel to the desired destination. This composite system will first be tested in a closed environment that effectively simulates real-life indoor navigation scenarios, and then it will be used by a number of visually impaired people in their daily routines.
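The abstract does not specify the positioning math; as one plausible instance, here is a 2-D least-squares trilateration from measured distances to three landmarks at known positions (all coordinates and distances invented):

```python
# Trilateration sketch: solve for the user's position from ranges to
# known landmarks by linearizing the circle equations |x - p_i|^2 = d_i^2.
import numpy as np

landmarks = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 8.0]])
dists = np.array([5.0, 8.06, 5.0])     # measured range to each landmark

# Subtracting the first circle equation from the others gives A x = b.
A = 2.0 * (landmarks[1:] - landmarks[0])
b = (dists[0] ** 2 - dists[1:] ** 2
     + np.sum(landmarks[1:] ** 2, axis=1) - np.sum(landmarks[0] ** 2))
position, *_ = np.linalg.lstsq(A, b, rcond=None)
print(position)                         # approximately [3, 4]
```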
Hot Dawgs
Southern Illinois University at Carbondale
According to the Department of Energy, heating and cooling comprised around 49% of all household energy usage in 2005 (1). In that year, heating and cooling a home cost over $800 a year on average (2). Installing a programmable thermostat can save $180 a year if it is set properly (3). Even then, the thermostat is still only one sensor that may or may not be in an ideal location in the home.
Our proposal is to use an Intel Atom processor and board to monitor a network of temperature sensors, one in each room. From the gathered data, the processor will adjust vents in each room to allow optimal cooling or heating of the house on a room-by-room basis. Our goals are to reduce the cost of heating and cooling a house, to design the system so that it can be retrofitted into existing houses, which stand to benefit the most, and to provide an easy-to-use interface.
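A sketch of the per-room control idea under assumptions (the setpoints, Fahrenheit units, and the 4 °F full-open band are invented): each vent opens in proportion to how far its room is from its setpoint.

```python
# Per-room vent control sketch: vent position in [0, 1] grows with the
# room's distance from its setpoint, in the direction the HVAC can fix.
SETPOINTS_F = {"bedroom": 68.0, "office": 70.0, "living": 72.0}

def vent_positions(readings_f: dict, cooling: bool) -> dict:
    vents = {}
    for room, temp in readings_f.items():
        error = temp - SETPOINTS_F[room]          # positive = too warm
        if not cooling:
            error = -error                         # heating: too cold opens vent
        vents[room] = min(max(error / 4.0, 0.0), 1.0)  # fully open at 4 F off
    return vents

print(vent_positions({"bedroom": 75.0, "office": 71.0, "living": 72.0},
                     cooling=True))
```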
Knights of the Workbench
Vermont Technical College
The irresponsible consumption of alcohol is an undeniable international problem. From the operation of motor vehicles while under the influence to drunken bar fights, it is certain that wherever alcohol is involved, the situation carries a greatly increased level of danger. Bartenders are currently limited by their ability to recall the contents and quantity of drinks served to bar patrons and to comply with legal policies regarding alcohol consumption, such as age limitations. These responsibilities are by no means easily manageable within the variable environments in which alcohol is normally served.
For the Intel-Cornell Cup, we propose creating a fully automated drink mixing machine that will solve these problems. Through the utilization of modern computer technology, we aim to increase the effectiveness of current safeguards by monitoring the blood alcohol content of each customer and using this information and other available metrics to limit alcohol consumption to responsible levels. The completion of this project will bring greater safety to the bars, streets, and homes of our world.
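The abstract does not say how blood alcohol content would be estimated; one standard approach is the Widmark formula, sketched here with invented patron data and a common 0.08% driving limit as the cutoff.

```python
# Widmark-formula sketch: estimate BAC from alcohol consumed, body
# weight, and elapsed time, then gate the next pour on the projection.
def widmark_bac(alcohol_g: float, weight_kg: float, male: bool,
                hours: float) -> float:
    """Estimated blood alcohol content in % (g/dL)."""
    r = 0.68 if male else 0.55            # Widmark distribution ratio
    bac = alcohol_g / (weight_kg * 1000 * r) * 100.0
    return max(0.0, bac - 0.015 * hours)  # average elimination ~0.015 %/h

def may_serve(bac_now: float, drink_alcohol_g: float,
              weight_kg: float, male: bool) -> bool:
    projected = bac_now + widmark_bac(drink_alcohol_g, weight_kg, male, 0.0)
    return projected < 0.08               # a common legal driving limit

# e.g. an 80 kg male patron at 0.05% asking for a drink with 14 g of alcohol
print(may_serve(0.05, 14.0, 80.0, True))
```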
The FIVOLTS
Worcester Polytechnic Institute
Daytime drowsiness and fatigue lead to decreased driving reliability, lower working efficiency and fatal accidents. According to recent research, heart rate variability can be robustly calculated from the photoplethysmogram (PPG) to indicate parasympathetic nervous activity and classify drowsiness level.
Concurrently, part of our group will conduct biomedical research on correlations between drowsiness and the physiological signals available from the PPG sensor, including heart rate variability, respiration rate, oxygen saturation (SpO2), and blood pressure dynamics, during fatigue-inducing cognitive experiments.
As a solution, we will design a control center using the Atom board that receives PPG data from a wireless headband over the ZigBee protocol, then processes the PPG to classify drowsiness levels.
Along with a built-in alarm, we will also provide customizable response commands to peripheral devices, such as switching tracks on a music player or flashing the vehicle’s emergency lights. Not only can driving efficiency and reliability be ensured, but lives will be saved. Furthermore, the control center will be able to connect multiple channels of wireless PPG sensors to reduce cost. Our product could also be used as a consumer health monitor, providing low-cost remote health care and synchronizing physiological data to a server.
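As one illustrative route from PPG to a drowsiness decision (only a sketch of the signal chain described, with invented thresholds): detect beats, compute the RMSSD of the beat-to-beat intervals as an HRV measure, and compare against a calibrated threshold.

```python
# HRV sketch: RMSSD of inter-beat intervals from detected PPG beat times.
import numpy as np

def rmssd(beat_times_s: np.ndarray) -> float:
    ibi = np.diff(beat_times_s) * 1000.0       # inter-beat intervals, ms
    return float(np.sqrt(np.mean(np.diff(ibi) ** 2)))

def drowsy(beat_times_s: np.ndarray, threshold_ms: float = 60.0) -> bool:
    # Higher RMSSD reflects stronger parasympathetic activity, which the
    # abstract links to drowsiness; the threshold would be calibrated.
    return rmssd(beat_times_s) > threshold_ms

beats = np.array([0.00, 0.82, 1.70, 2.48, 3.40])   # hypothetical beat times, s
print(rmssd(beats), drowsy(beats))
```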
Think Chair
Worcester Polytechnic Institute
The aim of this project is to instrument a wheelchair with an intuitive control and navigation system that integrates voice recognition, face tracking, and hand gesture interpretation. It allows the user to easily select his or her preferred method of control depending on situational demands or personal needs. This robotic wheelchair will use the Intel Tunnel Creek platform and the Atom processor to perform necessary computations. The system will actuate the power wheelchair base, determine when the user is controlling the robot and combine multiple interfaces for greater usability.
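As a minimal sketch of one way the interfaces could be combined (the recognizer outputs and the confidence floor are assumptions), an arbiter might act only on the user's chosen modality, and only when its recognizer is confident enough to rule out unintended actions:

```python
# Modality-arbitration sketch: each recognizer reports a command plus a
# confidence score; low-confidence readings are ignored.
CONFIDENCE_FLOOR = 0.7

def arbitrate(active_modality: str, inputs: dict) -> str:
    """inputs maps modality -> (command, confidence)."""
    cmd, conf = inputs.get(active_modality, ("none", 0.0))
    if conf >= CONFIDENCE_FLOOR:
        return cmd
    return "hold"        # ignore low-confidence or unintended actions

readings = {"voice": ("forward", 0.91), "gesture": ("left", 0.45)}
print(arbitrate("voice", readings))     # -> "forward"
print(arbitrate("gesture", readings))   # -> "hold"
```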
Current commercial wheelchairs have drawbacks. These systems may not meet real-time requirements, or they are easily influenced by the environment or by unintended actions of the user. They also do not accommodate users who cannot use their hands. No commercial wheelchair has the multi-mode capability of letting the user freely choose among commanding the wheelchair orally, with gestures, or by moving his or her head. This project will use affordable commodity hardware to reliably allow people with disabilities who cannot currently use joystick-based power wheelchairs to become mobile, and it will make the wheelchair a user-friendly and pleasant experience for those already using traditional electric wheelchairs.