Wireless Sensor Networks Lab -- Winter 2006
- Project Tracker: Color-based Multiple Agent Tracking for Wireless Image Sensor Networks
- Frances Lau
- Emre Oto
- Project Localizer: Robot-Assisted Localization Techniques for Wireless Image Sensor Networks
- Hattie Dong
- Huang Lee
- Project Dispatcher: Hybrid Network-Agent Vision
- Stephan Hengstler
- Daniel Kumar
- Hyunseung Paik
- Ali Maleki Tabar
- Project PEG: Pursuit-Evasion Game with Image Sensor Networks
- Eddie Kim
- John Poon
- Jamie Wu
- Marshall Yuan
- Project Soccer: Robotic Game Strategies
- Chien-An Lai
- Juo-Yu Lee
- Project Horus: Distributed Agent Control with Wireless Image Sensor Networks
- Chris McCormick
- Pierre-Yves Laligand
This project presents an implementation of a color-based multiple agent tracking algorithm targeted for wireless image sensor networks. It uses multiple image sensors to track and graphically display the paths of autonomous agents moving across the overlapping fields of view (FOV) of the sensors. A color histogram is constructed for each agent to identify its dominant hue, and this hue value is used as a means of identification. The algorithm reliably tracks the agents when collisions occur and locates the relative position of each agent during collisions. This work also studies the use of color histograms for image sensor color-balance self-calibration as a possible future extension of the project. The algorithm has low computational requirements and its complexity scales linearly with the size of the network, making it feasible for low-power, large-scale wireless sensor networks.
Color-Based Multiple Agent Tracking for Wireless Image Sensor Networks, F. Lau, E. Oto, and H. Aghajan, Advanced Concepts for Intelligent Vision Systems (ACIVS) Sept. 2006.
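The dominant-hue identification step described above can be sketched as follows. This is an illustrative reconstruction, not the project's actual code: bin counts, the hue range, and the agent names are assumptions, and the agent pixels are taken as already segmented.

```python
# Hypothetical sketch of hue-histogram agent identification, assuming
# hue values in [0, 360) and pre-segmented lists of agent pixel hues.

def dominant_hue(hues, bins=36):
    """Histogram the hue values and return the center of the fullest bin."""
    counts = [0] * bins
    width = 360 / bins
    for h in hues:
        counts[int(h // width) % bins] += 1
    peak = max(range(bins), key=lambda i: counts[i])
    return (peak + 0.5) * width

def circular_distance(a, b):
    """Shortest angular distance between two hues on the color wheel."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

def identify(observed_hue, known_agents):
    """Match an observed blob's hue to the closest registered agent."""
    return min(known_agents,
               key=lambda name: circular_distance(observed_hue, known_agents[name]))
```

The circular distance matters because hue wraps around: a red agent near 355 degrees and an observation near 5 degrees are only 10 degrees apart, not 350.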
This project presents a solution for localizing wireless image sensor networks in which the sensors are parallel to the object motion plane and a moving object can be controlled by the network. The localization algorithm for the baseline scenario, in which the moving agent knows its global position, is introduced first. This baseline case is then used to build more complex algorithms that localize networks in which the agent has no knowledge of its global position. We introduce the notion of representing the network as a forest structure to succinctly guide the algorithm flow. The localization data produced by the proposed algorithms agree well with results obtained by manual measurement.
Robot-Assisted Localization Techniques for Wireless Image Sensor Networks, H. Lee, H. Dong, and H. Aghajan, IEEE Conf. on Sensor, Mesh, and Ad Hoc Communications and Networks (SECON) Sept. 2006.
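The baseline case, where the agent knows its global position, admits a simple sketch. Assuming (this is an illustrative setup, not the paper's formulation) overhead nodes with a known ground-plane scale in meters per pixel, each observation pairs the robot's pixel offset from the image center with its reported global position, and the node solves for the ground position of its own FOV center:

```python
# Minimal sketch of the baseline localization case, assuming overhead
# nodes parallel to the motion plane with a known scale (meters/pixel)
# and a robot that broadcasts its global (x, y) position.

SCALE = 0.01  # assumed meters per pixel; illustrative value

def node_position(observations, scale=SCALE):
    """Average per-observation estimates of the node's FOV-center position.

    observations: list of ((u, v), (x, y)) pairs, where (u, v) is the
    robot's pixel offset from the image center and (x, y) is the
    robot's reported global position at that instant.
    """
    estimates = [(x - u * scale, y - v * scale)
                 for (u, v), (x, y) in observations]
    n = len(estimates)
    return (sum(e[0] for e in estimates) / n,
            sum(e[1] for e in estimates) / n)
```

Averaging over several robot sightings reduces the effect of pixel-quantization and measurement noise; in the non-baseline algorithms the robot's position would itself come from already-localized nodes rather than from the robot.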
The effectiveness of a wireless sensor network (WSN) depends primarily on its capability to provide sensing information from the deployed environment. One approach to introducing significant adaptability in a WSN is the use of mobile agents that can be dispatched to acquire detailed information about events. In wireless image sensor networks (WISNs), a vision-enabled mobile agent can extend the effective range of a stationary sensor network on an ad hoc, on-demand basis. This project introduces the concept of dispatching such an autonomous agent towards a target outside the visual range of a stationary WISN. The design and implementation of a vision-enabled mobile agent supporting wireless communication with the network is described. Experimental performance results confirm the feasibility of vision-enabled mobile agents.
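The dispatch concept can be sketched as a simple waypoint controller. This is a hedged illustration only: the step size, stopping radius, and function names are assumptions, and the real agent would refine the target using its onboard vision once in range.

```python
# Hypothetical dispatch sketch: the network issues a target coordinate
# beyond the stationary nodes' coverage, and the mobile agent advances
# toward it in fixed-size steps until within a stopping radius.

import math

def dispatch(start, target, step=0.5, stop_radius=0.25):
    """Move from start toward target in fixed-size steps; return the path."""
    x, y = start
    path = [(x, y)]
    while math.hypot(target[0] - x, target[1] - y) > stop_radius:
        dist = math.hypot(target[0] - x, target[1] - y)
        # Unit direction vector toward the target, scaled by the step size.
        dx = (target[0] - x) / dist
        dy = (target[1] - y) / dist
        advance = min(step, dist)
        x, y = x + dx * advance, y + dy * advance
        path.append((x, y))
    return path
```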
Pursuit-evasion games have traditionally been simulated either entirely in software or with individually sensing robots, each following its own algorithm. In our approach, we utilize "blind" robots that rely on an image sensor node to process real-time information about the playing field for movement algorithm calculation. By using a top-down view of the playing field, the robots have more information about their surroundings and can more effectively execute their objectives. In this project, we implement and evaluate the PEG algorithms using measurements made from the robots in action.
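The top-down control idea can be illustrated with a minimal grid version of the pursuit step: the central image sensor observes both positions and commands the "blind" pursuer one greedy step toward the evader each tick. The grid abstraction and tick limit are illustrative assumptions, not the project's actual PEG algorithms.

```python
# Minimal grid sketch of sensor-directed pursuit: the pursuer has no
# onboard sensing; each move command is computed from the image
# sensor's global top-down view of the playing field.

def sign(v):
    return (v > 0) - (v < 0)

def pursue(pursuer, evader, max_ticks=100):
    """Greedy pursuit on a grid; returns ticks until capture, else None."""
    px, py = pursuer
    ex, ey = evader
    for tick in range(1, max_ticks + 1):
        px += sign(ex - px)  # one greedy step per axis, per tick
        py += sign(ey - py)
        if (px, py) == (ex, ey):
            return tick
    return None
```

With a stationary evader the capture time is simply the larger of the two axis distances, since the pursuer closes both axes simultaneously.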
In this project, a robot game is realized based on visual data collection by an image sensor and radio commands to teams of robots. Each robot follows a strategy based on the status of its teammate and the opponents at any given time.
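A strategy of the kind described, driven by the global state the image sensor provides, might reduce to a role assignment per tick. The names and the nearest-to-ball rule below are illustrative assumptions, not the project's actual strategies:

```python
# Illustrative per-tick role assignment from the image sensor's global
# view: the teammate nearest the ball attacks, the other defends.

import math

def assign_roles(team, ball):
    """team: dict name -> (x, y) position. Returns dict name -> role."""
    dist = {name: math.hypot(pos[0] - ball[0], pos[1] - ball[1])
            for name, pos in team.items()}
    attacker = min(dist, key=dist.get)
    return {name: ("attack" if name == attacker else "defend")
            for name in team}
```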
In this work, we propose a technique for network-based control of mobile agents by distributed image sensors. The proposed method is based on tracking the movement of the agents using vision-based processing by the network nodes. Real-time control commands are issued to the agents by the nodes via wireless links, guiding them to desired destinations. A vision-based network localization technique is developed to allow each network node to learn its location and field-of-view in a relative coordinate system. Protocols for hand-over of the agent control between the image sensor nodes are also proposed. The developed program allows for implementing various robot monitoring and control techniques, and has been used as an educational platform for distributed sensing and real-time control projects.
Distributed Agent Control with Self-Localizing Wireless Image Sensor Networks, C. McCormick, P.-Y. Laligand, H. Lee, and H. Aghajan, COGnitive systems with Interactive Sensors (COGIS), March 2006.
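The hand-over protocol between image sensor nodes can be sketched as follows. This is a hedged illustration under simplifying assumptions (rectangular FOVs in the shared relative coordinate frame, and hysteresis so control transfers only when the agent leaves the current controller's view); it is not the protocol proposed in the paper.

```python
# Hypothetical hand-over sketch: each node owns a rectangular FOV in
# the relative coordinate system established by the vision-based
# localization, and control of an agent passes to another node only
# when the agent leaves the current controller's FOV.

def in_fov(fov, pos):
    """fov is (x0, y0, x1, y1); pos is (x, y)."""
    x0, y0, x1, y1 = fov
    return x0 <= pos[0] <= x1 and y0 <= pos[1] <= y1

def controller(fovs, current, pos):
    """Keep the current node while it still sees the agent (hysteresis);
    otherwise hand control to any node whose FOV covers the agent."""
    if current is not None and in_fov(fovs[current], pos):
        return current
    for node, fov in fovs.items():
        if in_fov(fov, pos):
            return node
    return None  # agent outside all fields of view
```

The hysteresis matters in the overlap regions: without it, control would oscillate between two nodes whenever the agent moved along a FOV boundary.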