Online Model Reconstruction for Interactive Virtual Environments

Interacting With Dynamic Real Objects in a Virtual Environment
Benjamin Lok, Dissertation Defense
University of North Carolina at Chapel Hill, April 12, 2002
Advisor: Dr. Frederick P. Brooks Jr.
Committee: Prof. Mary C. Whitton, Dr. Gregory F. Welch, Dr. Edward S. Johnson, Dr. Anselmo Lastra

Outline
- Motivation: why we need dynamic real objects in VEs
- Incorporation of Dynamic Real Objects: how we get dynamic real objects into VEs
- Managing Collisions Between Virtual and Dynamic Real Objects
- User Study: what good are dynamic real objects?
- NASA Case Study: applying the system to a driving real-world problem
- Conclusion

Assembly Verification
- Given a model, we would like to explore:
  - Can it be readily assembled?
  - Can repairers service it?
- Examples: changing an oil filter; attaching a cable to a payload

Current Immersive VE Approaches
- Most objects are purely virtual: user, tools, parts
- Most virtual objects are not registered with a corresponding real object
- The system has limited shape and motion information about real objects

Ideally
- Would like:
  - Accurate virtual representations, or avatars, of real objects
  - Virtual objects responding to real objects
  - Haptic feedback
  - Correct affordances
  - Constrained motion
- Example: unscrewing a virtual oil filter from a car engine model

Dynamic Real Objects
- Tracking and modeling dynamic objects would:
  - Improve interactivity
  - Enable visually faithful virtual representations
- Dynamic objects can change shape and change appearance

Thesis Statement
Naturally interacting with real objects in immersive virtual environments improves task performance and presence in spatial cognitive manual tasks.

Previous Work: Incorporating Real Objects into VEs
- Non-real time: Virtualized Reality (Kanade, et al.)
- Real time: Image-Based Visual Hulls [Matusik00, 01], 3D Tele-Immersion [Daniilidis00]
- Augmenting specific objects for interaction: doll's head [Hinkley94], plate [Hoffman98]

Previous Work: Avatars
- Self-avatars in VEs:
  - What makes avatars believable? [Thalmann98]
  - What avatar components are necessary? [Slater93, 94, Garau01]
- VEs currently have: choices from a library, generic avatars, or no avatars
- Generic avatars are better than no avatars [Slater93]
- Are visually faithful avatars better than generic avatars?

Visual Incorporation of Dynamic Real Objects in a VE

Approach
- Handle dynamic objects
- Interactive rates
- Bypass an explicit 3D modeling stage
- Inputs: outside-looking-in camera images
- Generate an approximation of the real objects (the visual hull)

Reconstruction Algorithm
1. Start with live camera images
2. Perform image subtraction
3. Use the object-pixel images to calculate the volume intersection
4. Composite the result with the VE

Object Pixels
- Identify new objects
- Perform image subtraction: separate the object pixels from the background pixels
- current image - background image = object pixels
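A minimal sketch of the image-subtraction step, assuming grayscale camera images stored as NumPy arrays and a simple per-pixel difference threshold; the function name and threshold value are illustrative, not taken from the dissertation:

```python
import numpy as np

def segment_object_pixels(current, background, threshold=25):
    """Label pixels that differ enough from the background as object pixels.

    current, background -- grayscale images of identical shape, dtype uint8
    threshold           -- illustrative per-pixel difference threshold
    Returns a boolean mask: True where a pixel belongs to a (new) real object.
    """
    diff = np.abs(current.astype(np.int16) - background.astype(np.int16))
    return diff > threshold

# Toy usage: a bright 2x2 "object" appears in front of a dark background.
background = np.zeros((8, 8), dtype=np.uint8)
current = background.copy()
current[3:5, 3:5] = 200
mask = segment_object_pixels(current, background)
print(mask.sum())  # 4 object pixels
```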

Visual Hull Computation
- Visual hull: the tightest volume given a set of object silhouettes
- The intersection of the projections of the object pixels

Volume Querying
- A point inside the visual hull projects onto an object pixel in each camera

Volume Querying
- Next, we do volume querying on a plane
- For an arbitrary view, we sweep a series of planes
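The dissertation's system performs volume querying on graphics hardware (the SGI Reality Monster, per the implementation slide below); the CPU sketch here only illustrates the geometry. A sample point lies inside the visual hull when it projects onto an object pixel in every camera, and a view is approximated by testing grids of points on a series of swept planes. The pinhole Camera class, the constant-z planes, and all parameters are simplifying assumptions for illustration:

```python
import numpy as np

class Camera:
    """Minimal pinhole camera with an object-pixel mask (illustrative only)."""
    def __init__(self, K, R, t, object_mask):
        self.K, self.R, self.t = K, R, t   # intrinsics, rotation, translation
        self.object_mask = object_mask     # boolean image from image subtraction

    def sees_object_pixel(self, point):
        """True if the 3D world-space point projects onto an object pixel."""
        p_cam = self.R @ point + self.t    # world -> camera coordinates
        if p_cam[2] <= 0:                  # behind the camera
            return False
        uvw = self.K @ p_cam
        u, v = int(uvw[0] / uvw[2]), int(uvw[1] / uvw[2])
        h, w = self.object_mask.shape
        return 0 <= v < h and 0 <= u < w and bool(self.object_mask[v, u])

def point_in_visual_hull(point, cameras):
    """A point is inside the visual hull iff every camera sees it as an object pixel."""
    return all(cam.sees_object_pixel(point) for cam in cameras)

def sweep_planes(cameras, depths, grid):
    """Volume-query a series of planes (simplified here to planes of constant z).

    depths -- z values of the swept planes
    grid   -- (x, y) sample positions on each plane
    Returns the 3D sample points found to lie inside the visual hull.
    """
    hits = []
    for z in depths:
        for x, y in grid:
            p = np.array([x, y, z])
            if point_in_visual_hull(p, cameras):
                hits.append(p)
    return hits
```

Sweeping planes from the user's viewpoint yields a view-specific sampling of the visual hull without ever building an explicit 3D model, matching the "bypass an explicit 3D modeling stage" goal above.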

Implementation
- Cameras: 1 HMD-mounted and 3 wall-mounted; the SGI Reality Monster handles up to 7 video feeds
- Computation: image subtraction is the most work
- ~16,000 triangles/sec, 1.2 gigapixels
- 15-18 fps; estimated error: 1 cm
- Performance will increase as graphics hardware continues to improve

Results

Managing Collisions Between Virtual and Dynamic Real Objects

Approach
- We want virtual objects to respond to real-object avatars
- This requires detecting when real and virtual objects intersect
- If intersections exist, determine plausible responses

Assumptions
- Only virtual objects can move or deform at collision
- Both real and virtual objects are assumed stationary at collision
- We catch collisions soon after a virtual object enters the visual hull, not as it exits the other side

Detecting Collisions: Approach
For each virtual object i:
1. Volume-query each triangle of object i
2. Are there real-virtual collisions?
   - No: done with object i
   - Yes: determine the points on the virtual object that are in collision, then calculate a plausible collision response
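Continuing the same simplified CPU model, a sketch of the detection step: sample points on each triangle of a virtual object and volume-query them; any sample inside the visual hull is a collision point. The barycentric sampling scheme, its density, and the inside_hull predicate argument are illustrative assumptions (the predicate could be point_in_visual_hull from the earlier sketch); the dissertation's system volume-queries the triangles themselves, at about 5,000 triangles per second per the performance slide below.

```python
import numpy as np

def triangle_samples(a, b, c, n=4):
    """Barycentric sample points covering triangle (a, b, c); n controls density."""
    pts = []
    for i in range(n + 1):
        for j in range(n + 1 - i):
            k = n - i - j
            pts.append((i * a + j * b + k * c) / n)
    return pts

def detect_collisions(triangles, inside_hull, n=4):
    """Volume-query sample points on each triangle of a virtual object.

    triangles   -- list of (a, b, c) vertex triples (NumPy arrays, world space)
    inside_hull -- predicate: True if a 3D point lies inside the visual hull
    Returns the triangle sample points found to collide with the real objects.
    """
    return [p for tri in triangles for p in triangle_samples(*tri, n=n)
            if inside_hull(p)]

# Toy usage with a dummy hull: everything within 0.5 units of the origin.
tri = (np.array([0., 0., 0.]), np.array([1., 0., 0.]), np.array([0., 1., 0.]))
hits = detect_collisions([tri], inside_hull=lambda p: np.linalg.norm(p) < 0.5)
print(len(hits))  # 4 sample points in collision
```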

Resolving Collisions: Approach
1. Estimate the point of deepest virtual-object penetration
2. Define a plausible recovery vector
3. Estimate the point of collision on the visual hull
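A heavily simplified sketch of the three response steps. Here the recovery vector is assumed to be the reverse of the virtual object's last motion direction, penetration is measured by marching each collision point along that vector until it leaves the visual hull, the deepest-penetration point is the sample with the largest such distance, and the hull collision point is where that sample exits. The marching step size and the choice of recovery direction are assumptions, not the dissertation's exact method:

```python
import numpy as np

def resolve_collision(collision_points, motion_dir, inside_hull,
                      step=0.005, max_dist=0.5):
    """Sketch of the three response steps (illustrative, not the exact method).

    collision_points -- non-empty list of virtual-object sample points that
                        were found inside the visual hull
    motion_dir       -- virtual object's last motion direction; the recovery
                        vector is assumed here to be its reverse
    inside_hull      -- predicate: True if a 3D point lies inside the visual hull
    Returns (deepest_point, recovery_vector, hull_point, penetration_depth).
    """
    recovery = -np.asarray(motion_dir, dtype=float)
    recovery /= np.linalg.norm(recovery)

    def depth_and_exit(p):
        # March along the recovery vector until the sample leaves the hull.
        d = 0.0
        while inside_hull(p + d * recovery) and d < max_dist:
            d += step
        return d, p + d * recovery

    # Step 1: the deepest-penetration point is the sample that travels farthest
    #         along the recovery vector before leaving the hull.
    depths = [depth_and_exit(p) for p in collision_points]
    i = int(np.argmax([d for d, _ in depths]))
    deepest_point = collision_points[i]
    penetration_depth, hull_point = depths[i]

    # Step 2: recovery is the plausible recovery vector (assumed above).
    # Step 3: hull_point is where the deepest point exits the visual hull.
    return deepest_point, recovery, hull_point, penetration_depth
```

Translating the virtual object by the returned penetration depth along the recovery vector would then move it just outside the visual hull, giving a plausible response.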

Results

Collision Detection / Response Performance
- Volume-query about 5,000 triangles per second
- Error of collision points is ~0.75 cm; depends on the average size of the virtual objects' triangles
- Tradeoff between accuracy and time
- Plenty of room for optimizations

Spatial Cognitive Task Study

Study Motivation
- Effects of:
  - Interacting with real objects
  - Visual fidelity of self-avatars
- On:
  - Task performance
  - Presence
- For spatial cognitive manual tasks

Spatial Cognitive Manual Tasks
- Spatial ability: visualizing a manipulation in 3-space
- Cognition: the psychological processes involved in the acquisition, organization, and use of knowledge

Hypotheses
- Task performance: participants will complete a spatial cognitive manual task faster when manipulating real objects, as opposed to virtual objects only.
- Sense of presence: participants will report a higher sense of presence when their self-avatars are visually faithful, as opposed to generic.

Task
- Manipulated identical painted blocks to match target patterns
- Each block had six distinct patterns
- Target patterns: 2x2 blocks (small) and 3x3 blocks (large)

Measures
- Task performance: time to complete the patterns correctly
- Sense of presence (after experience): Steed-Usoh-Slater Sense of Presence Questionnaire (SUS)
- Other factors: spatial ability (before experience); simulator sickness (before and after experience)

Conditions
- All participants did the task in a real space environment (RSE).
- Each participant then did the task in one of three VEs: Purely Virtual, Hybrid, or Visually Faithful Hybrid.

Real Space Environment
- Task was conducted within a draped enclosure
- Participant watched a monitor while performing the task
- RSE performance was a baseline to compare against VE performance

Purely Virtual Environment
- Participant manipulated virtual objects
- Participant was presented with a generic avatar

Hybrid Environment
- Participant manipulated real objects
- Participant was presented with a generic avatar

Visually Faithful Hybrid Environment
- Participant manipulated real objects
- Participant was presented with a visually faithful avatar

Conditions
- Avatar fidelity (generic vs. visually faithful) addresses the sense-of-presence hypothesis; what participants interact with (virtual vs. real objects) addresses the task-performance hypothesis:

  Interact with      Generic avatar    Visually faithful avatar
  Virtual objects    PVE               -
  Real objects       HE                VFHE

Task Performance Results

                                      Small Pattern Time (s)    Large Pattern Time (s)
  Condition                           Mean     S.D.             Mean     S.D.
  Real Space (n=41)                   16.8     6.3              37.2     9.0
  Purely Virtual (n=13)               47.2     10.4             117.0    32.3
  Hybrid (n=13)                       31.7     5.7              86.8     26.8
  Visually Faithful Hybrid (n=14)     28.9     7.6              72.3     16.4

Task Performance Results (t-tests)

                                            Small Pattern Time    Large Pattern Time
  Comparison                                t       p             t       p
  Purely Virtual vs. Vis. Faithful Hybrid   3.32    0.0026**      4.39    0.00016***
  Purely Virtual vs. Hybrid                 2.81    0.0094**      2.45    0.021*
  Hybrid vs. Vis. Faithful Hybrid           1.02    0.32          2.01    0.055

  * significant at the α=0.05 level   ** α=0.01 level   *** α=0.001 level

Sense of Presence Results

  Condition                               SUS Sense of Presence Score (0..6)
                                          Mean    S.D.
  Purely Virtual Environment              3.21    2.19
  Hybrid Environment                      1.86    2.17
  Visually Faithful Hybrid Environment    2.36    1.94

Sense of Presence Results (t-tests)

  Comparison                                    t       p
  Purely Virtual vs. Visually Faithful Hybrid   1.10    0.28
  Purely Virtual vs. Hybrid                     1.64    0.11
  Hybrid vs. Visually Faithful Hybrid           0.64    0.53

Debriefing Responses
- Participants felt almost completely immersed while performing the task.
- They felt the virtual objects in the virtual room (such as the painting, plant, and lamp) improved their sense of presence, even though they had no direct interaction with these objects.

- They felt that seeing an avatar added to their sense of presence. PVE and HE participants commented on the fidelity of motion, whereas VFHE participants commented on the fidelity of appearance.
- VFHE and HE participants felt the tactile feedback of working with real objects improved their sense of presence.
- VFHE participants reported getting used to manipulating and interacting in the VE significantly faster than PVE participants.

Study Conclusions
- Interacting with real objects provided a substantial performance improvement over interacting with virtual objects for cognitive manual tasks.
- Debriefing comments show that the visually faithful avatar was preferred, though reported sense of presence was not significantly different.
- Kinematic fidelity of the avatar is more important than visual fidelity for sense of presence.
- Handling real objects makes task performance and interaction in the VE more like the actual task.

Case Study: NASA Langley Research Center (LaRC) Payload Assembly Task

NASA Driving Problems
- Given payload models, designers and engineers want to evaluate:
  - Assembly feasibility
  - Assembly training
  - Repairability
- Current approaches: measurements, design drawings, step-by-step assembly instruction lists, low-fidelity mock-ups

Task
- Wanted a plausible task given common assembly jobs
- Abstracted a payload layout task: screw in a tube, attach a power cable

Task Goal
- Determine how much space should be allocated between the TOP of the PMT and the BOTTOM of Payload A

Videos of Task

Results
The tube was 14 cm long, 4 cm in diameter.

  Question / Participant                                           #1      #2                    #3         #4
  (Pre-experience) How much space is necessary?                    14 cm   14.2 cm               15-16 cm   15 cm
  (Pre-experience) How much space would you actually allocate?     21 cm   16 cm                 20 cm      15 cm
  Actual space required in VE                                      15 cm   22.5 cm               22.3 cm    23 cm
  (Post-experience) How much space would you actually allocate?    18 cm   16 cm (modify tool)   25 cm      23 cm

Results
- Time cost of the spacing error: #1 days to months; #2 30 days; #3 days to months; #4 months
- Financial cost of the spacing error: #1 $100,000s; #2 $1,000,000+ (largest cost is the huge hit in schedule); #3 $100,000s-$1,000,000+; #4 $100,000s
- Late discovery of similar problems is not uncommon.

Case Study Conclusions
- Benefits of object-reconstruction VEs:
  - Specialized tools and parts require no modeling
  - Short development time to try multiple designs
  - Allows early testing of subassembly integration from multiple suppliers
  - Early identification of assembly, design, or integration issues can result in considerable savings in time and money

Conclusions: Innovations
- Presented algorithms for:
  - Incorporation of real objects into VEs
  - Handling interactions between real and virtual objects
- Conducted formal studies to evaluate:
  - Interaction with real vs. virtual objects (significant effect)
  - Visually faithful vs. generic avatars (no significant effect)
- Applied to a real-world task

Future Work
- Improved model fidelity
- Improved collision detection and response
- Further studies to illuminate the relationship between avatar kinematic fidelity and visual fidelity
- Apply the system to upcoming NASA payload projects

Thanks
- Most importantly: my parents and family
- Funding agencies: The LINK Foundation, NIH (Grant P41 RR02170), National Science Foundation, Office of Naval Research
- Committee members: Dr. Frederick P. Brooks (Advisor), Prof. Mary Whitton, Dr. Edward Johnson, Dr. Anselmo Lastra, Dr. Gregory Welch
- Collaborators: Samir Naik, Danette Allen (NASA LaRC), Effective Virtual Environments Group
