“Project Argus” Mixed Reality Exploration

October 2015–December 2016

With Mina Fahmi.

Argus Panoptes: in Greek mythology, the all-seeing 100-eyed giant of watchfulness.

Argus was a project I worked on with a friend, Mina Fahmi, made possible by funding from the ProjX branch of MIT TechX. Project Argus consists of two headsets built to provide the user with enhanced and augmented vision. Each headset targets a distinct use case: the Argus Heavy is a larger stereoscopic device with more functionality that replaces the user’s entire field of vision, while the Argus Lite is a more portable device that covers only one eye, leaving the other free to see normally. Possible applications for these devices include nighttime urban or rural exploration, industrial work, and reconnaissance. I was interested in exploring wearable technology and human augmentation; this project also gave me an opportunity to work with the Raspberry Pi single-board computer.

Both headsets share a core set of features. Each enables night and low-light vision through infrared LEDs, and a rotating mechanism on the front of each device allows on-the-fly switching between lenses. Both have a standard lens, a fisheye lens, and a macro lens; the Heavy adds 7x zoom lenses as well. A camera for each eye feeds images through a Raspberry Pi to a screen inside the device, which the wearer views through a convex lens (similar to those used in Google Cardboard). The units run on battery packs (one 10,000 mAh pack for the Lite, two 15,600 mAh packs for the Heavy) that provide over six hours of continuous runtime.

Argus is built as a versatile hardware platform for future software development: because every image is routed through a Raspberry Pi, almost any combination of information overlays and sensor readings can be composited into the user’s field of view. Some ideas include adding an ultrasonic sensor to display distances to objects, enabling photo and video capture, overlaying temperature information, or experimenting with phone and internet connectivity.

Development

After deciding on components and the basic mechanical design of the devices (e.g. the use of the Raspberry Pi and the rotating lens-changing mechanism), we built a cardboard prototype of the Lite to validate the design and examine the viability of image matching between the left and right eyes before we moved on to final CAD.

Initially, we wanted to use 2.8” TFT screens for their smaller size; however, we found that those screens communicate only over the Pi’s GPIO pins and thus don’t take advantage of the GPU, leading to poor performance and issues with outputting the video feed. Consequently, we chose the smallest HDMI screens available for the Pi (5”). Our prototype confirmed the viability of aligning a digital image with a natural one: with some adjustments to the size and position of the digital image, we were able to mentally combine the two images without strain.

Both devices were designed (primarily by Mina) in SolidWorks to be 3D printed. Printing took about two weeks; assembly was fairly straightforward and required about one full day of work. Switches were incorporated into the final assembly so the IR LEDs could be turned on and off as needed. To make the devices comfortable and wearable, we used foam padding and a combination of elastic and static straps.

To run the camera, I wrote a simple Python script using the picamera package by Dave Jones. The script starts the camera and displays a preview at a size and location we adjusted during development, overlaying the date and time at the top of the image. Python (rather than, say, a shell script) was chosen so that adding functionality in the future (e.g. ultrasonic sensor readings, photo/video capture) would take just a few lines of code. SSHing into the Raspberry Pis made it quick and easy to control the computers and fine-tune the code while the device was being worn; getting the images into alignment took under twenty minutes.
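
A minimal sketch of that script follows; the preview geometry here is an illustrative placeholder, not the exact values we tuned over SSH.

    #!/usr/bin/env python
    # Sketch of the per-eye camera script: start the Pi camera, show a
    # preview whose size and position align the digital image with the
    # wearer's natural view, and overlay the date and time.
    import time
    from datetime import datetime

    import picamera

    camera = picamera.PiCamera()

    # Illustrative preview geometry (x, y, width, height); the real
    # values were dialed in over SSH while the headset was being worn.
    camera.start_preview(fullscreen=False, window=(60, 40, 680, 480))

    try:
        while True:
            # picamera renders annotate_text at the top of the frame.
            camera.annotate_text = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
            time.sleep(1)
    finally:
        camera.stop_preview()
        camera.close()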

The entire development process, from design to finished product, took roughly two or three months.

Conclusions

The resulting headsets successfully demonstrated the viability of the vision-enhancement devices we had originally envisioned. In early February 2016, we displayed and demonstrated Project Argus at MIT xFair to members of the MIT community and representatives of various tech companies; the response was positive and showed strong interest, and both devices held up well to six hours of continuous use.

Demonstrating Argus at MIT xFair, February 2016. Photo courtesy TechX. (Hi Jordan!)

These are initial prototype devices; possible future hardware work would involve simplifying and streamlining the design to make it more robust and manageable. In particular, it would be interesting to explore using the Raspberry Pi Compute Module and custom PCBs to reduce the size of both devices and eliminate the second Pi from the Heavy, since the Compute Module is much smaller and has two camera inputs. It would also be worthwhile to explore other camera and display options with the goal of creating a smaller, lighter, and more user-friendly device. There are also many software possibilities mentioned above that we have not yet explored, such as adding various sensors (one such overlay is sketched below) and interfacing with other mobile devices.
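
As an example of what such a sensor overlay might look like, here is a hypothetical sketch that reads an HC-SR04-style ultrasonic sensor over GPIO and shows the distance through picamera’s text annotation. The sensor choice, pin assignments, and polling rate are all assumptions for illustration, not part of the build described above.

    # Hypothetical sketch: overlay distance readings from an
    # HC-SR04-style ultrasonic sensor on the camera preview.
    # Pin numbers (BCM 23/24) and the sensor itself are assumptions.
    import time

    import picamera
    import RPi.GPIO as GPIO

    TRIG, ECHO = 23, 24

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(TRIG, GPIO.OUT)
    GPIO.setup(ECHO, GPIO.IN)

    def read_distance_cm():
        """Fire one 10-microsecond ping and time the echo pulse."""
        GPIO.output(TRIG, True)
        time.sleep(0.00001)
        GPIO.output(TRIG, False)
        start = end = time.time()
        while GPIO.input(ECHO) == 0:   # wait for the echo pulse to start
            start = time.time()
        while GPIO.input(ECHO) == 1:   # wait for the echo pulse to end
            end = time.time()
        # Speed of sound is ~34,300 cm/s; halve for the round trip.
        return (end - start) * 34300 / 2

    camera = picamera.PiCamera()
    camera.start_preview()
    try:
        while True:
            camera.annotate_text = '%.0f cm' % read_distance_cm()
            time.sleep(0.2)
    finally:
        camera.stop_preview()
        GPIO.cleanup()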

Argus Lite.
Mina wearing Argus Heavy.
Argus Heavy in the dark with IR lights turned on.