hyperSense: Augmenting human experience in environments

Amrita Khoshoo
8 min read · Sep 1, 2020

Prof Dina El Zanfaly | Carnegie Mellon University, School of Design | Fall ‘20

9.3 // Intro

Hi, I’m Amrita. I’m currently a second-year MDes student in the School of Design.

I’m a huge fan of popular culture, interactive media, design, and learning. In my free time, you’ll likely catch me watching a quirky comedy show (latest show rec: What We Do In the Shadows), running outside, or spending time with family.

path to design: bay area, fitbit, cmu

Prior to CMU, I was a project manager by day and a dance teacher by night. I’ve always been drawn to making, and decided to transition into design by coming back to school. I’ve been at CMU for two years now, and it’s been an awesome experience.

9.8 // Engaging the Body

Design an interaction that engages the body physically.

I was thinking about connection and/or play in remote contexts. Specifically, what are different ways to feel connected to someone else's context? I thought about two options, though each could also potentially stand alone. The first involves light + movement and the second sound + movement.

light + movement

9.15 // Arduino + Iterations

Arduino

SOS LED + Piezo Buzzer Activities

Interactions

3 ideas that materialize a certain interaction

I was thinking about two questions: (1) How might we feel connected to other people in remote spaces? (2) How might we feel connected to other remote environments?

My first two ideas build on and combine ideas I presented last week. Last week, I was urged to think about the sense of touch and to incorporate different materials and textures. I was also encouraged to think about combining elements of my second idea (specifically sound) into my first idea.

Case Study Inspiration

9.22 // Refining Interaction + Arduino + Reading Tweets

Refining Interactions

This week, I continued to expand on the idea of connecting two spaces and to develop a richer, sensory-based interaction experience.

Connected aquariums through proximity + sound wave detection // visualizing proximity + sound through water, plant movements

Two small aquarium objects that detect proximity and sound waves. When someone approaches the aquarium in space 1, the water/algae in space 2 will start to sway (similar to a seafloor), and vice versa. A person can come close and speak to the aquarium, which causes the aquarium in the second space to vibrate faster and its seafloor to move faster. A rough sketch of the mapping logic is below.
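Since the hardware is still undecided, here's a minimal, hardware-agnostic Python sketch of the two-way mapping I have in mind. The 2 m sensing range, the 0-100 mic scale, and the movement_speed helper are all assumptions for illustration, not measured values.

# hedged sketch: map a visitor's closeness + voice level in one space
# to the sway speed of the water/seafloor in the other space
def movement_speed(proximity_cm, sound_level):
    """Return a sway speed between 0 (still) and 1 (fastest)."""
    near = max(0.0, 1.0 - proximity_cm / 200.0)  # assumed 2 m sensing range
    loud = min(1.0, sound_level / 100.0)         # assumed 0-100 mic reading
    return min(1.0, 0.5 * near + 0.5 * loud)     # closer + louder = faster

# space 1's readings drive space 2's motion, and vice versa
print(movement_speed(proximity_cm=40, sound_level=70))  # -> fast sway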

Arduino

button + ultrasonic sensor

Reading Tweets

Being Outside The Dominion of Time explores the nature of temporality as a human constraint. Temporality is an important concept to study because “it is perhaps the most fundamental constituent of human cognition” (48).

Exploring the Reflective Potentialities of Personal Data builds upon the concept of slow technology to explore how making data more materially present and interactive can open up possibilities for reflective, memory-oriented experiences.

The Perception of the Environment explores the complex nature of temporality in which past, present, and future are intertwined amongst people and the environment.

9.29 // Refining Interactions + Phenomenology Reading Tweet

I’ll be working with Rachel, Christianne, and Isabel for the semester project. We met up this weekend to discuss potential How Might We research questions + possible concepts rooted around them.

HMW brainstorm // concept ideation

Reading Tweet

Phenomenology is the study of consciousness and experience, and of the interplay between the body and the sensations that come from being in the world. As I read the article, I could not help but think about the idea of "design designs," which encompasses the loops between humans and the tools humans create, and how this relationship unfolds through time.

Concept Progress

We’re looking to develop a perceptual portal that connects remote spaces. We’re playing around with concepts of temporality and memory. How can we leave traces of what came before?

form + interaction

10.14 // Progress + Reading Tweet

It’s been a minute since I’ve updated this process blog. Over the past few weeks, my teammates and I have been refining our interactive installation.

storyboard credit: rachel!

Body to LED Mapping

Right now, we’re thinking through the tech-side of this mirror and starting to prototype various, smaller interactions. I’ve been looking into ways to map body position to LEDs. There is a number of ways it can be done:

Kinect — Max8 — Arduino — Adafruit Servos

Arduino Uno — Arduino EyeShield — LED Matrix

Webcam — Touchdesigner — Arduino LED Matrix

Arduino — Phototransmitter — LED Matrix

Kinect — Raspberry Pi — LED Matrix (what we'll use)
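Whichever pipeline we pick, the core mapping step is the same: figure out where the body is in the camera/depth frame and scale that position down to the LED grid. Here's a minimal sketch of that idea in Python; the 1 m "near" threshold and the body_to_led helper name are my own placeholders.

# hedged sketch: find the body's centroid in a depth frame and scale it
# to a 32x32 LED grid (threshold + names are placeholder assumptions)
import numpy as np

def body_to_led(depth, matrix_size=32, near_mm=1000):
    """Return the (row, col) LED nearest the body's centroid, or None."""
    ys, xs = np.nonzero(depth < near_mm)      # pixels nearer than ~1 m
    if len(xs) == 0:
        return None                           # nobody in range
    h, w = depth.shape
    row = int(np.mean(ys) * matrix_size / h)  # scale centroid to LED rows
    col = int(np.mean(xs) * matrix_size / w)  # scale centroid to LED cols
    return row, col

# fake 480x640 depth frame with a "body" blob near the center
frame = np.full((480, 640), 2000)
frame[200:300, 280:360] = 800
print(body_to_led(frame))  # -> roughly the middle of the grid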

Other Technology to Look Into

touchdesigner-arduino connection // ultrasonic sensor controlled LEDs based on distance

Reading Tweet: CYBORGS! I really enjoyed this reading. I was immediately reminded of Donna Haraway's A Cyborg Manifesto and of Kara Platoni's talk Transforming Perception One Sense at a Time. I appreciate that interdisciplinary groups of people are coming together to discuss long-term symbiotic man-machine futures. Also, I could not help but think about "trust" between human and machine. This encompasses both the trust of a machine working in close coupling with the body and trust in matters of data.

November Updates // Deep diving into the technical

We’ve been experimenting with smaller prototypes / prototyping in stages to get our primary interaction of body movements displaying on a matrix to work. Because we’re all designers, this has also helped scaffold the learning process of technical implementation.

Because I’m remote, Dina sent me all of the hardware needed to prototype various interactions for our project (!!).

the hardware setup: pi, rgb bonnet, matrix, kinect, peripherals

Prototyping stages

I’ve been working pretty closely with my brother, who’s an engineer, to get these prototypes working (huge superstar thanks!🌟). Dina’s been helping us with the tech implementation of this project also (huge superstar thanks!🌟). It’s 100% been a learning process.

Kinect to MacBook / Raspberry Pi via OpenKinect + OpenCV

  • Getting Kinect depth data to display on MacBook, then the Raspberry Pi
  • The installations took some time but eventually worked with these two tutorials and some refinements: Installing OpenCV + Libfreenect Installation (a minimal depth-display sketch is below)
kinect depth data to pi + snacks
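For reference, here's roughly the minimal Python needed to pull a depth frame from the Kinect via libfreenect's Python wrapper and show it with OpenCV. Treat it as a sketch; the exact code varied as we refined the tutorials.

# hedged sketch: grab Kinect depth frames via libfreenect's python
# wrapper and display them with OpenCV
import freenect
import cv2
import numpy as np

def depth_8bit():
    depth, _ = freenect.sync_get_depth()  # returns (11-bit depth, timestamp)
    return (depth >> 3).astype(np.uint8)  # scale 0-2047 down to 0-255

while True:
    cv2.imshow('kinect depth', depth_8bit())
    if cv2.waitKey(10) == 27:  # Esc to quit
        break
cv2.destroyAllWindows()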

REFINEMENTS for OpenCV to work with comp/matrix

Section 1: Installing Packages for OpenCV

do everything until step 6, then do:

sudo apt-get install libgtk2.0-dev

*rpi already has python 2 + 3 installed

Section 3: Compiling OpenCV on your Raspberry Pi

use this for step 2 instead (*adding GTK for UI elements):

cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules \
-D ENABLE_NEON=ON \
-D WITH_GTK=ON \
-D ENABLE_VFPV3=ON \
-D BUILD_TESTS=OFF \
-D INSTALL_PYTHON_EXAMPLES=OFF \
-D OPENCV_ENABLE_NONFREE=ON \
-D CMAKE_SHARED_LINKER_FLAGS=-latomic \
-D BUILD_EXAMPLES=OFF ..

Section 5: Testing OpenCV on your Raspberry Pi

for step one, instead do (python 2, not 3):

python

import cv2
cv2.__version__

Kinect to Pi to Matrix: 4 prototypes

Because the Pi prototypes run on Python, I worked with my brother to get them working:

(1) live video: Kinect depth data to display on a 32x32 LED matrix.

  • rpi-rgb-led-matrix github
  • Getting the matrix up and running required splicing together/refining code from here (demo threshold file) and here (rgb matrix as a display); a condensed sketch is below.
i'm waving at the kinect here; you can see my handprint on the matrix
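In spirit, the spliced-together code boils down to something like this: threshold the depth frame, shrink it to 32x32, and hand it to the rpi-rgb-led-matrix Python bindings. The 'adafruit-hat' mapping and the 600 mm threshold are assumptions for illustration, not our exact settings.

# hedged sketch: threshold Kinect depth + push it to a 32x32 panel via
# the rpi-rgb-led-matrix python bindings (hzeller repo)
import freenect
import numpy as np
from PIL import Image
from rgbmatrix import RGBMatrix, RGBMatrixOptions

options = RGBMatrixOptions()
options.rows = 32
options.cols = 32
options.hardware_mapping = 'adafruit-hat'  # assumption: adafruit rgb bonnet
matrix = RGBMatrix(options=options)

while True:
    depth, _ = freenect.sync_get_depth()
    near = (depth < 600).astype(np.uint8) * 255    # assumed "close" threshold
    small = Image.fromarray(near).resize((32, 32)) # shrink to panel size
    matrix.SetImage(small.convert('RGB'))          # light up the near pixels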

(2) past video: Kinect depth data to (1) record, (2) store, (3) replay on matrix.

  • Added a trackbar to rewind the video. Needs a monitor to work for now. The idea in the future would be to replace the monitor trackbar with a capacitive sensor/copper wire on the installation itself (rough sketch of the record/scrub logic below).
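A rough sketch of the record-then-scrub logic; the 300-frame buffer and window names are arbitrary choices of mine.

# hedged sketch: buffer depth frames in memory, then scrub through them
# with an OpenCV trackbar (this is why a monitor is needed for now)
import freenect
import cv2
import numpy as np

frames = []
for _ in range(300):  # record ~10 seconds of frames (arbitrary length)
    depth, _ = freenect.sync_get_depth()
    frames.append((depth >> 3).astype(np.uint8))

cv2.namedWindow('replay')
cv2.createTrackbar('rewind', 'replay', 0, len(frames) - 1, lambda v: None)

while True:
    i = cv2.getTrackbarPos('rewind', 'replay')  # trackbar picks the frame
    cv2.imshow('replay', frames[i])
    if cv2.waitKey(30) == 27:  # Esc to quit
        break
cv2.destroyAllWindows()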

(3) superimposed past on live video: getting stored Kinect depth data to replay on the matrix with live video superimposed on top (blending sketch below).

matrix: shows superimposition of past + present — can see two sets of my hand // monitor: shows present video + trackbar which controls video rewind on matrix
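The superimposition itself is essentially a blend of a stored frame with the live one. A minimal sketch, assuming equal weights (the real prototype scrubs the stored frame with the trackbar):

# hedged sketch: blend a stored "past" depth frame with the live one
import freenect
import cv2
import numpy as np

def depth_8bit():
    depth, _ = freenect.sync_get_depth()
    return (depth >> 3).astype(np.uint8)

past = depth_8bit()  # stand-in for a frame pulled from the recorded buffer
while True:
    live = depth_8bit()
    both = cv2.addWeighted(past, 0.5, live, 0.5, 0)  # equal-weight blend
    cv2.imshow('past + present', both)
    if cv2.waitKey(30) == 27:  # Esc to quit
        break
cv2.destroyAllWindows()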

(4) kinect depth data to trigger sound + volume: based on distance from the kinect, sound volume will increase or decrease. The sound is ambient noise from the environment.
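A minimal sketch of the distance-to-volume mapping, assuming pygame for playback; the file name ambient.wav and the median-depth "closeness" measure are my placeholders, not the exact prototype code.

# hedged sketch: the closer a body is to the kinect, the louder the
# ambient loop plays (pygame assumed for audio)
import freenect
import numpy as np
import pygame

pygame.mixer.init()
ambient = pygame.mixer.Sound('ambient.wav')  # hypothetical file name
channel = ambient.play(loops=-1)             # loop forever

while True:
    depth, _ = freenect.sync_get_depth()
    # median depth as a rough "how close is the person" measure
    closeness = 1.0 - np.clip(np.median(depth) / 2047.0, 0.0, 1.0)
    channel.set_volume(float(closeness))     # closer -> louder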

Next steps include stringing together 6 matrices // getting the installation up and running.

Form

We’ve been playing around with materials + form at the 3D lab. More details to come…

Putting it together


Amrita Khoshoo

Interaction Designer | Carnegie Mellon University, School of Design | MDes ’21