Research Engineer
I specialize in multi-sensory digital embodiment. I've worked on frameworks for olfactory and tactile experiences in XR, multi-sensory storytelling, and AI-assisted volumetric streaming. Outside of work, I enjoy drawing, reading, rock climbing, and discussing gaming classics.
I worked on experimental applications (an interactive movie poster), developer tooling (runtime debugging tools built on reflection), and cinematic VR experiences (a VR film). At Dreamscape, I worked with various teams (e.g., sound, design, programming, story).
At Baltu, I was a lead developer on SuperDoc, an on-the-job training platform used to quickly capture and deliver the specialized skills and knowledge of any organization. One of my core focuses was building the REST API and integrating it seamlessly into the application.
I was project lead for an ASU AR application that lets students explore content collected in ASU Dreamscape Learn experiences. It was tested by 100+ students in introductory biology courses.
In this work, we build Planetary Visor, a virtual reality tool for visualizing orbital and ground data along the ongoing traverse of NASA's Mars Science Laboratory Curiosity rover in Gale Crater. I served as lead developer, building the early foundation of the project for future students to take over and conduct research on top of.
We have built a 3D terrain along Curiosity's traverse using rover images, and within it we visualize satellite data as polyhedrons, superimposed on that terrain. This system provides perspectives of VNIR spectroscopic data from a satellite aligned with ground images from the rover, allowing the user to explore both the physical aspects of the terrain and their relation to the mineral composition. The result is a system that provides seamless rendering of datasets at vastly different scales. We conduct a user study with subject matter experts to evaluate the success and potential of our tool.
During my internship at NASA MSFC, I developed a framework and application for creating VR-based training applications. As we constructed training simulations, routinely consulting and testing with several teams (e.g., designers, mechanical engineers, systems engineers), I observed that the downtime for incorporating changes to a training procedure added weeks of development.
Thus, we built a tool to uniquely tag objects and the operations associated with them (e.g., movement, combining) into a sequence of processes and tasks, saved as JSON. Additionally, using the PiXYZ Studio SDK, we created and integrated a pipeline for importing complex CAD models from a server into the Unity application at runtime.
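A minimal sketch of how such a tagged task sequence could be serialized with Unity's JsonUtility; the field names (taskId, targetObjectId, operation) are illustrative, not the exact schema we used.

```csharp
using System;
using System.Collections.Generic;
using UnityEngine;

// Illustrative data model for a training procedure: a process is an ordered
// list of tasks, each tagging a target object with an operation.
[Serializable]
public class TrainingTask
{
    public string taskId;          // unique tag for this step
    public string targetObjectId;  // unique tag of the scene object it acts on
    public string operation;       // e.g., "Move", "Combine"
}

[Serializable]
public class TrainingProcess
{
    public string processName;
    public List<TrainingTask> tasks = new List<TrainingTask>();

    // Serialize the whole procedure to JSON so a procedure change is a data
    // edit rather than a scene rebuild.
    public string ToJson() => JsonUtility.ToJson(this, prettyPrint: true);

    public static TrainingProcess FromJson(string json) =>
        JsonUtility.FromJson<TrainingProcess>(json);
}
```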
As lead developer of the Designing for Dreamscape project, I focused on coordinating integrations across teams: working closely with Unity Timeline developers on narrative construction, with modelers on asset updates, and with programmers on event and interaction creation.
Our class was one of the first groups to use the Dreamscape Learn virtual reality platform. The project involved creating a time-traveling climate-change scenario, and it received praise from industry professionals, including Walter Parkes, CEO of Dreamscape Immersive.
We developed a mobile AR Commencement experience for ASU's Fall 2020 Commencement. Working alongside designers and programmers, I played a key role in backend engineering, creating scripted timelines, and leading the volumetric capture of the speakers. The ASU Commencement app uses augmented reality (AR), allowing users to virtually participate in the ceremony through their devices' cameras.
Graduates can join key moments, such as standing on stage with President Crow and singing the Alma Mater with the ASU Chamber Singers. Built with the Unity game engine and supported by Google ARCore and Apple ARKit, the app features iconic ASU locations and immersive 360-degree footage of the Sun Devil Football Stadium.
Our framework allows users to prepare a sequence of animation states. At presentation time, presenters can invoke the animations to occur simultaneously on HMDs and mobile devices. As lead developer, I built this system and then tested its capabilities with Dr. Tanya Harrison at Amazon re:MARS 2019.
For a communication backbone, HoloLucination uses client-server HTTP requests for simplicity and scalability. As shown in the figure above, every device connected to a session retrieves updates from the web server to switch its animation state index. Presenter and client devices continually poll the server through web requests to receive animation updates.
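A minimal sketch of that polling loop in Unity, assuming a hypothetical endpoint that returns the current animation state index as plain text; the real HoloLucination server and payload format differ.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Polls a web server for the current animation state index and applies it
// whenever it changes. Endpoint and payload format are illustrative.
public class AnimationStatePoller : MonoBehaviour
{
    [SerializeField] private string stateUrl = "http://example.com/session/state"; // hypothetical endpoint
    [SerializeField] private float pollIntervalSeconds = 0.5f;

    private int currentStateIndex = -1;

    private IEnumerator Start()
    {
        while (true)
        {
            using (UnityWebRequest request = UnityWebRequest.Get(stateUrl))
            {
                yield return request.SendWebRequest();

                if (request.result == UnityWebRequest.Result.Success &&
                    int.TryParse(request.downloadHandler.text, out int serverIndex) &&
                    serverIndex != currentStateIndex)
                {
                    currentStateIndex = serverIndex;
                    ApplyAnimationState(serverIndex);
                }
            }
            yield return new WaitForSeconds(pollIntervalSeconds);
        }
    }

    private void ApplyAnimationState(int index)
    {
        // Trigger the index-th animation in the prepared sequence here.
        Debug.Log($"Switching to animation state {index}");
    }
}
```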
SuperDoc -- designed for hands-on employees -- is a software application for capturing and sharing the specialized skills and knowledge of an organization. As a lead developer at Baltu, I focused on building the core backend system, serving the front end from a backend database.
I worked primarily in Unity, merging multiple SDKs to build a coherent experience that allows users to upload and retrieve any form of multimedia (e.g., 2D and 3D content, images, video, audio), which is then rendered within the application. Further, I explored experimental features mapping knowledge tips and multimedia to AR anchors (e.g., Apple World Anchors, Azure Spatial Anchors, Oculus Anchors).
At Dreamscape Immersive, I helped develop experimental cinematic VR experiences, a hand- and body-tracking multi-user interactive movie poster system, and a set of internal tools. During my time with Dreamscape, I collaborated with various teams (e.g., professors, art, sound design, narrative, and programming), working on multi-sensory educational and entertainment VR experiences deployed for use by 1000+ ASU students and across 3+ storefronts.
For one of my projects, I proposed and developed a runtime debugging tool using C#'s reflection APIs to dramatically reduce downtime, allowing developers to inspect the code behind virtual objects, invoke methods dynamically, and view/edit variables at runtime.
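A simplified sketch of the core reflection calls involved: listing a component's members, writing a field, and invoking a method by name at runtime. The actual tool wraps this in an in-headset UI.

```csharp
using System.Reflection;
using UnityEngine;

// Minimal runtime inspector: enumerate, edit, and invoke members of any
// component on a target object via reflection.
public static class RuntimeInspector
{
    private const BindingFlags Flags =
        BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance;

    public static void DumpMembers(Component target)
    {
        var type = target.GetType();
        foreach (FieldInfo field in type.GetFields(Flags))
            Debug.Log($"{type.Name}.{field.Name} = {field.GetValue(target)}");
        foreach (MethodInfo method in type.GetMethods(Flags))
            Debug.Log($"{type.Name}.{method.Name}({method.GetParameters().Length} args)");
    }

    public static void SetField(Component target, string fieldName, object value)
    {
        FieldInfo field = target.GetType().GetField(fieldName, Flags);
        field?.SetValue(target, value);
    }

    public static object Invoke(Component target, string methodName, params object[] args)
    {
        MethodInfo method = target.GetType().GetMethod(methodName, Flags);
        return method?.Invoke(target, args);
    }
}
```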
I then worked on actor recording tools, mapping tracked facial geometry between mobile and VR devices (e.g., Apple iPhone, Meta Quest Pro) and incorporating IK rigs using Unity's latest open solutions.
As project lead and lead programmer for the Dreamscape Mobile application, I had the privilege of working with a dedicated team of students to develop a cross-platform mobile AR app using Unity. Following a sprint methodology, we held weekly check-ins to ensure progress and collaborated closely with the student team.
The app empowered students to engage in interactive classroom assignments and seamlessly review data from their VR experiences. Leveraging AWS server integration, Unity asset bundles, and OAuth2 authentication, we strove to provide a robust technical foundation. Through collaboration with various teams across ASU, the app aims to enhance the learning journey of over 150 students per semester in introductory ASU biology courses.
The project is composed of a 3D video editor and a simple streaming/playback system. My hope is to 1) provide developers with a simple framework for creating 3D movies using consumer devices (i.e., iPhone, Quest), and 2) use it to make my own 3D movies. The goal is a seamless experience for manipulating 3D videos, adding effects, and merging them with virtual environments.

Under the hood, the system is built with various Unity frameworks and SDKs, such as Barracuda, URP, VFX Graph, Quest Integration, and ARKit. Key influences include the work of Keijiro Takahashi and Marek Simonik, from which my own work stems. I have ported existing libraries to the Quest platform and written custom C++ libraries -- loaded as linked libraries -- to improve performance for various tasks. The editor features multi-track support with customizable VFX graphs for different data types, such as voxels and point clouds. It offers flexible control over visual properties (e.g., particle count and size) and temporal properties (e.g., how frequently frames are updated). Users also benefit from features like body segmentation and the ability to merge 3D videos with advanced cropping tools.
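To give a flavor of those playback controls, here is a stripped-down sketch that advances point-cloud frames through a VFX Graph at a configurable rate while exposing particle count and size; the exposed property names ("ParticleCount", "ParticleSize", "PositionMap") are illustrative, not the editor's real schema.

```csharp
using UnityEngine;
using UnityEngine.VFX;

// Plays back a sequence of point-cloud frames through a VFX Graph,
// exposing particle count/size and the frame update rate as knobs.
public class PointCloudTrackPlayer : MonoBehaviour
{
    [SerializeField] private VisualEffect vfx;
    [SerializeField] private Texture2D[] positionFrames;   // one position map per frame
    [SerializeField] private float framesPerSecond = 30f;  // temporal property
    [SerializeField] private int particleCount = 100000;   // visual property
    [SerializeField] private float particleSize = 0.01f;   // visual property

    private float timer;
    private int frameIndex;

    private void Update()
    {
        vfx.SetInt("ParticleCount", particleCount);
        vfx.SetFloat("ParticleSize", particleSize);

        timer += Time.deltaTime;
        if (timer >= 1f / framesPerSecond && positionFrames.Length > 0)
        {
            timer = 0f;
            frameIndex = (frameIndex + 1) % positionFrames.Length;
            vfx.SetTexture("PositionMap", positionFrames[frameIndex]);
        }
    }
}
```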
Introducing a simple 2D-to-3D stereo, side-by-side (SBS) video generator: make any normal video 3D and viewable on a Quest. Processed entirely on a local home PC, this framework combines image processing techniques and depth-map predictions to generate a separate view for each eye, creating a 3D effect when viewed with the appropriate hardware. I made a simple Jupyter Notebook that people can run to generate an SBS video and upload it directly to a Quest. Developers can swap in whatever depth model they prefer (I tested with PatchFusion and Marigold).
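The underlying idea, independent of the notebook's actual code: each eye's view is the source frame with pixels shifted horizontally by a disparity derived from the predicted depth. A conceptual single-frame sketch (written in C# for consistency with the rest of this page), assuming depth is normalized to [0, 1] with 0 meaning near:

```csharp
using UnityEngine;

// Builds left/right views by shifting pixels horizontally by a disparity
// proportional to predicted (normalized) depth. Closer pixels shift more.
public static class StereoFromDepth
{
    // color: row-major packed-RGB frame; depth: per-pixel values in [0, 1], 0 = near.
    public static (int[] left, int[] right) MakeViews(
        int[] color, float[] depth, int width, int height, float maxDisparityPx)
    {
        var left = new int[color.Length];
        var right = new int[color.Length];

        for (int y = 0; y < height; y++)
        {
            for (int x = 0; x < width; x++)
            {
                int i = y * width + x;
                int disparity = (int)((1f - depth[i]) * maxDisparityPx);

                int xLeft = Mathf.Clamp(x + disparity / 2, 0, width - 1);
                int xRight = Mathf.Clamp(x - disparity / 2, 0, width - 1);

                left[y * width + xLeft] = color[i];
                right[y * width + xRight] = color[i];
            }
        }
        // A real pipeline would also fill the holes this forward warp leaves behind.
        return (left, right);
    }
}
```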
This project is an amalgamation of technology and creativity. I was the lead developer, working with friends as the creative artists. The goal was to build the tools they needed to tell a personal story of how such technology can connect with users.
We use RGBD sensors to capture volumetric footage of real-world events. The captured footage is then anchored to the location of the event using Azure Spatial Anchors, which -- with Cosmos DB syncing world coordinates to the volumetric capture footage -- enables users to experience the event in its original context.
In addition, the project offers the flexibility to replay the event in any environment. The backend of the system uses a database to map Azure Anchors to the volumetric capture files, which are then loaded onto a device for playback. This approach allows for a unique and immersive way to relive events, bridging the gap between digital and physical space.
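A simplified sketch of that mapping layer, with hypothetical record fields and a lookup method; in the actual backend these records live in Cosmos DB rather than an in-memory dictionary.

```csharp
using System;
using System.Collections.Generic;

// Illustrative backend record: one spatial anchor ID mapped to the
// volumetric capture file recorded at that location.
[Serializable]
public class CaptureRecord
{
    public string anchorId;     // Azure Spatial Anchor identifier
    public string footageUrl;   // where the volumetric capture file lives
    public DateTime capturedAt;
}

public class CaptureIndex
{
    private readonly Dictionary<string, CaptureRecord> byAnchor =
        new Dictionary<string, CaptureRecord>();

    public void Add(CaptureRecord record) => byAnchor[record.anchorId] = record;

    // When a device relocalizes an anchor, resolve which footage to load for playback.
    public bool TryResolve(string anchorId, out CaptureRecord record) =>
        byAnchor.TryGetValue(anchorId, out record);
}
```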
SWISH, a Shifting-Weight Interface of Simulated Hydrodynamics, bridges the gap between visual and haptic experiences in virtual environments by providing a realistic sense of fluid motion in handheld vessels. This is achieved by utilizing virtual reality tracking and motor actuation to actively shift the center of gravity of the handheld device, emulating the dynamic nature of fluid motion. SWISH has shown potential in various applications such as chemistry education, worker training, and immersive entertainment, as demonstrated by user studies.
For this project, I worked on integrating the NVIDIA FleX fluid simulation library with Unreal Engine and Unity. Additionally, I worked on mapping center-of-mass (CoM) movement to stepper motor commands.
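A sketch of that second part under simplifying assumptions: the weight travels on a linear rail, and the desired center of mass along that rail is converted to an absolute, rate-limited step target for the stepper driver. The constants are placeholders, not SWISH's real calibration.

```csharp
using UnityEngine;

// Converts a desired center-of-mass offset along a linear rail into an
// absolute stepper-motor step target. Constants are illustrative.
public class ComToStepper : MonoBehaviour
{
    [SerializeField] private float railLengthMeters = 0.20f;   // usable travel of the weight
    [SerializeField] private int stepsPerFullTravel = 3200;    // steps to traverse the rail
    [SerializeField] private float maxStepsPerSecond = 2000f;  // rate limit for the driver

    private float currentStep;

    // comOffsetMeters: desired CoM position along the rail, measured from its center.
    public int TargetStep(float comOffsetMeters)
    {
        float normalized = Mathf.Clamp(comOffsetMeters / railLengthMeters + 0.5f, 0f, 1f);
        return Mathf.RoundToInt(normalized * stepsPerFullTravel);
    }

    // Called each physics tick with the CoM estimated from the fluid simulation.
    public int NextCommand(float comOffsetMeters, float deltaTime)
    {
        float target = TargetStep(comOffsetMeters);
        float maxDelta = maxStepsPerSecond * deltaTime;
        currentStep = Mathf.MoveTowards(currentStep, target, maxDelta);
        return Mathf.RoundToInt(currentStep);   // absolute step sent to the motor controller
    }
}
```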
The ASU 360 Virtual Campus application is a comprehensive web-based desktop and VR platform that enables users to explore various ASU campuses. As lead developer, I created an internal tool that empowers us to curate customized campus tours for each location using a custom JSON schema. This tool includes a user-friendly web interface, allowing instructors to seamlessly upload 360-degree pictures and assign specific campus locations to each image.
Additionally, I collaborated closely with a team of designers to ensure a seamless and visually appealing front-end interface. Lastly, I documented the project to facilitate a smooth handoff to the newly assigned team taking it over.
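A rough sketch of what such a tour schema can look like as Unity-serializable classes; the field names here are illustrative, not the custom schema the tool actually used.

```csharp
using System;
using System.Collections.Generic;
using UnityEngine;

// Illustrative tour schema: a campus has named stops, each tied to a
// 360-degree image and an optional caption.
[Serializable]
public class TourStop
{
    public string locationName;   // e.g., "Tempe - Old Main" (placeholder)
    public string imageUrl;       // uploaded 360-degree picture
    public string caption;
    public float latitude;
    public float longitude;
}

[Serializable]
public class CampusTour
{
    public string campusName;
    public List<TourStop> stops = new List<TourStop>();

    public static CampusTour FromJson(string json) =>
        JsonUtility.FromJson<CampusTour>(json);
}
```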
BSI (the British Standards Institution) wanted to experiment with a training application to explore remote working options. As a contractor, I built a tool for constructing asynchronous, task-based training applications, allowing future developers to build similar experiences. Additionally, my colleague and I implemented the ability to collect new kinds of user/employee performance data by tracking the timeliness of task execution and generating heatmaps based on head gaze in VR.
To build this application, we rigorously followed a sprint methodology. We met with teams internal to BSI to document the processes involved in such an inspection and reviewed previous training material. Further, we interviewed various teams to identify key performance metrics for an example use case (sea crate inspections).
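A minimal sketch of the head-gaze heatmap idea: raycast along the headset's forward vector each frame and accumulate dwell time into a grid over the workspace. The grid resolution, bounds, and accumulation scheme here are illustrative.

```csharp
using UnityEngine;

// Accumulates head-gaze dwell time into a 2D grid over the workspace,
// which can later be rendered as a heatmap texture.
public class GazeHeatmap : MonoBehaviour
{
    [SerializeField] private Transform headTransform;        // HMD camera transform
    [SerializeField] private Vector2 workspaceSizeMeters = new Vector2(10f, 10f);
    [SerializeField] private int gridResolution = 64;

    private float[,] dwellSeconds;

    private void Awake() => dwellSeconds = new float[gridResolution, gridResolution];

    private void Update()
    {
        // Where is the user looking?
        if (Physics.Raycast(headTransform.position, headTransform.forward, out RaycastHit hit, 50f))
        {
            // Map the hit point into grid coordinates centered on this object.
            Vector3 local = hit.point - transform.position;
            int gx = Mathf.Clamp(
                Mathf.FloorToInt((local.x / workspaceSizeMeters.x + 0.5f) * gridResolution),
                0, gridResolution - 1);
            int gz = Mathf.Clamp(
                Mathf.FloorToInt((local.z / workspaceSizeMeters.y + 0.5f) * gridResolution),
                0, gridResolution - 1);
            dwellSeconds[gx, gz] += Time.deltaTime;
        }
    }

    public float[,] DwellSeconds => dwellSeconds;   // consumed by a heatmap renderer/exporter
}
```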
At my old house, our staircase had no installed lighting, making the stairs difficult to use at night. To solve this, we combined existing LED strips, a microphone, PIR motion sensors, and an Arduino Uno to build a responsive lighting system for our staircase.
The 'illuminated stairs' respond to music (with dynamically changing light patterns) and to motion. We wired everything ourselves and then 3D printed custom casings to hide all the wires.
I built ChatbotAvatarAI to serve as a starting Unity project for developers, merging state-of-the-art technologies like OpenAI API, Azure Voice API, Google Cloud Speech to Text, and Oculus Lip Sync to craft an interactive AI-based chatbot interface.
This framework facilitates seamless integration with YOLO-NAS, allowing the NPC to stream virtual camera frames and receive responses from a YOLO-NAS server instance, creating an immersive experience. Users can engage in a conversation with the NPC, receiving near-instantaneous responses in an Azure-generated voice, while the NPC can also be configured to visually perceive its environment with YOLO-NAS.
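At a high level, the NPC runs a listen-think-speak loop. The sketch below shows the control flow only, with the service calls hidden behind hypothetical wrapper interfaces (ISpeechToText, IChatModel, ITextToSpeech) rather than the real SDK signatures.

```csharp
using System.Threading.Tasks;
using UnityEngine;

// Hypothetical wrapper interfaces standing in for the real services
// (Google Cloud Speech to Text, OpenAI API, Azure voice, Oculus Lip Sync).
public interface ISpeechToText { Task<string> ListenAsync(); }
public interface IChatModel    { Task<string> ReplyAsync(string userUtterance); }
public interface ITextToSpeech { Task<AudioClip> SynthesizeAsync(string text); }

// Control flow of the NPC's listen-think-speak loop; lip sync is driven by
// the AudioSource that plays the synthesized clip.
public class ChatbotNpc : MonoBehaviour
{
    [SerializeField] private AudioSource voiceOutput;   // lip sync reads from this source

    private ISpeechToText stt;
    private IChatModel chat;
    private ITextToSpeech tts;

    public void Configure(ISpeechToText stt, IChatModel chat, ITextToSpeech tts)
    {
        this.stt = stt; this.chat = chat; this.tts = tts;
    }

    public async Task ConverseOnceAsync()
    {
        string userUtterance = await stt.ListenAsync();       // microphone -> text
        string reply = await chat.ReplyAsync(userUtterance);  // text -> LLM response
        AudioClip clip = await tts.SynthesizeAsync(reply);    // response -> voice

        voiceOutput.clip = clip;
        voiceOutput.Play();                                   // lip sync follows the audio
    }
}
```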
The Smell Engine is a design framework and software/hardware system that presents users with a real-time physical odor synthesis of virtual smells. As project lead, I engineered the system closely with collaborator Dr. Rick Gerkin, conducted system and user studies, and open-sourced the system.
Using our Smell Composer design framework with the Unity Game Engine, developers can configure odor sources in virtual space. At runtime, the Smell Engine system presents users with physical odors that match what they would sense in the virtual environment.
To do this, our Smell Mixer component integrates with the Unity Game Engine to dynamically estimate the odor mix that the user would smell. Our Smell Controller runtime then coordinates an olfactometer to physically present an approximation of that odor mix to the user's mask. Through a three-part user study, we found that the Smell Engine can produce odors at granularities finer than our subjects' olfactory detection sensitivity. We also found that the Smell Engine improves users' ability to precisely localize odors in the virtual environment over state-of-the-art trigger-based olfactory systems.
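The mixing step can be illustrated with a simplified distance-based falloff: each odor source contributes a concentration that decays with distance to the user's nose, and contributions are summed per odorant into the target mix. This is a sketch of the idea only, not the dispersion model the Smell Engine actually uses.

```csharp
using System.Collections.Generic;
using UnityEngine;

// An odor source placed in the scene: one odorant at a source concentration.
public class OdorSource : MonoBehaviour
{
    public string odorantName;              // e.g., "limonene"
    public float sourceConcentration = 1f;  // arbitrary units at the source
}

// Simplified stand-in for the Smell Mixer: sums a distance-based falloff of
// every source, per odorant.
public static class SimpleSmellMixer
{
    public static Dictionary<string, float> EstimateMix(
        Vector3 nosePosition, IEnumerable<OdorSource> sources)
    {
        var mix = new Dictionary<string, float>();
        foreach (OdorSource source in sources)
        {
            float distance = Vector3.Distance(nosePosition, source.transform.position);
            // Inverse-square falloff, clamped near the source to avoid blow-up.
            float contribution = source.sourceConcentration / Mathf.Max(distance * distance, 0.01f);

            mix.TryGetValue(source.odorantName, out float current);
            mix[source.odorantName] = current + contribution;
        }
        return mix;   // odorant name -> estimated concentration, handed to the olfactometer controller
    }
}
```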
We introduce an AI- and edge-assisted Volumetric Streaming System (VSS) designed to efficiently distribute high-quality volumetric data between two endpoints. This research project focuses on creating an intelligent, multi-process scheduling system that streamlines data manipulation across multiple data streams. As lead researcher, I built i) the core framework for feeding in data and transmitting it across the network, ii) a testing tool for collecting and visualizing system metrics, iii) a set of WPF, Python, and Unity applications for using the system in different runtime configurations, and iv) an API for other teams to experiment with our framework.
VSS supports reconfigurable network architectures (e.g., peer-to-peer, server/client) and streams volumetric data from clients to a recipient client via WebRTC for Python and SipSorcery for .NET. The receiver, built in Unity, processes and visualizes the incoming volumetric data, and the system supports the Kinect device in both the Python and .NET settings for streaming high-definition volumetric data.
As a starting prototype, we built a pipeline that incorporated body-tracking models to single out human subjects within the RGBD frames, allowing us to discard non-essential pixels and prioritize regions of interest, thereby optimizing subsequent processing steps. Additionally, we used style transfer models to enhance the visual aesthetics of the RGB frames (which also exposed the bottlenecks of such filter combinations).
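A sketch of the masking step, assuming the body-tracking model provides a per-pixel body-index map aligned with the color frame (as Kinect-style pipelines do); pixels that belong to no tracked body are zeroed so later stages can skip or compress them cheaply.

```csharp
// Zero out pixels that do not belong to any tracked body, given a per-pixel
// body-index map aligned with the color frame. Index 255 is assumed to mean
// "background" here, mirroring Kinect-style body index maps.
public static class BodyMask
{
    public const byte BackgroundIndex = 255;

    // colorBgra: 4 bytes per pixel; bodyIndex: 1 byte per pixel, same resolution.
    public static void MaskInPlace(byte[] colorBgra, byte[] bodyIndex)
    {
        for (int pixel = 0; pixel < bodyIndex.Length; pixel++)
        {
            if (bodyIndex[pixel] == BackgroundIndex)
            {
                int offset = pixel * 4;
                colorBgra[offset] = 0;       // B
                colorBgra[offset + 1] = 0;   // G
                colorBgra[offset + 2] = 0;   // R
                colorBgra[offset + 3] = 0;   // A
            }
        }
    }
}
```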