====== research:internal_small_version ======

[Revisions compared: 2016/11/04 14:47 (anthony) and 2019/05/22 16:08 (current; external edit from 127.0.0.1)]
====== End of Spring 2015 semester notes ======
  
[taken by __**Daijreous Poole**__ (?)]
  
===== Brainstorming =====
====== 2015.9.28 lab meeting ======
  
===== AC's notes before the meeting =====
  
  * Specify display and PC to use
==== Comparison with large display version ====
  
  * **Stephanie**: What's the focus: equating the Gigapixel display context, or the difference between user interfaces?
  
  * **James**: loss of proprioceptive cues for item location is unavoidable
  
  * Draw bezels in picture.
  * Pan out farther than the whole image, to include a view of the feet of the GP monitor?
  
  * AC: give up on drawing cues from outside the frame of the image.
  
==== UI (User Interface) ====
====== 2015.10.12 lab meeting notes ======
  
[by AC]
  
===== Software for tracking interaction with display =====
http://linux.die.net/man/1/xev
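Whichever tool we settle on, the output we need is a timestamped stream of cursor events. A minimal Python sketch of that logging step (the CSV fields and the ''log'' helper are hypothetical, not part of xev or any tool above):

```python
import csv
import io
import time

# Hypothetical CSV layout for a cursor-event log; adjust fields as needed.
FIELDS = ["t", "event", "x", "y"]

def make_logger(fileobj):
    """Return a log(event, x, y, t) function that appends CSV rows to fileobj."""
    writer = csv.DictWriter(fileobj, fieldnames=FIELDS)
    writer.writeheader()
    def log(event, x, y, t=None):
        # Default to a monotonic clock so inter-event intervals are reliable.
        writer.writerow({"t": time.monotonic() if t is None else t,
                         "event": event, "x": x, "y": y})
    return log

# Example: feed it samples from whatever event source we pick
# (xev-style events, a Python hook, ...); these samples are made up.
buf = io.StringIO()
log = make_logger(buf)
log("move", 120, 45, t=0.016)
log("press", 120, 45, t=0.5)
```

The log could just as well go straight to a file opened with ''newline=""''; the in-memory buffer is only for illustration.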
  
====== 2015.10.24 Notes ======

===== Notes on software =====

__**SR**__ (2015.10.24): Downloaded and tried MouseTracker (http://www.mousetracker.org/)
  * Very simple display, maybe too simplistic for our needs.
  * Not sure how pan/zoom could be applied through the native GUI.
===== Pseudo-code for a possible pan/zoom interface =====
  
[AC's notes ahead of meeting:]
  
<code>
====== Notes about matlab tools for interactive UI ======
  
By AC:
  
===== Overview panel for zoomed-in images =====
===== Matlab efforts =====
  
AC made a new directory on uniqua: ''/home/anthony/Documents/MATLAB/LLID/lsnd/''
  
"lsnd" stands for "Learning in Small Non-interactive Displays," which is meant to be the opposite of the abbreviation of the overall study (LLID = Learning in Large Interactive Displays).
Exact notes on Blender as follows:
  
==== Blender Tasks: ====
  
-Record Video of participant.
----
  
__**James & Agilay**__: working on tracking participant responses and programming loops. Still a work in progress.
  
__**Stephanie**__: working on character movements within the virtual environment. Significant progress made here! We can now make a character capable of crouching; however, a little more work needs to be done specifying game physics (e.g. object and character physical boundaries).
  
__**Priya & Dale**__: working on boundaries and game physics. Still a work in progress.
  
\\

==== WHAT'S HAPPENING NEXT: ====
  
We came to the conclusion that it's time to start sourcing our work into one working draft. Anthony believes the solution is to use Git. Git will allow us to concatenate our Blender files with the goal of creating a version of the Blender game that we can use for the actual study.
  
=== GOAL(S): ===
  
Finish coding before the SONA pool closes.
Figure out remaining game physics and looping issues.
  
NOTE FROM __**JAMES**__:
  
I definitely think the remaining issues can be figured out before the SONA pool closes. I anticipate having figured out all of the remaining kinks in the looping/respawning over spring break. I'll also work on the game physics.
  
Special thanks to __**Stephanie**__ for the demo today (everything looks really great and has a polished feel). Good job so far everyone!
  
\\
  
__**Agilay**__:
  
Found http://sourceforge.net/projects/minimousemacro/
Determined not to be ideal -- ruled out
  
__**James**__:
  
The seemingly random links I posted below reflect my attempt to make zoomed/scaled distances on a screen congruent with actually moving up to an object in real life. This approach is a hybrid of the two different methods I was working on and is the most promising; the others would work individually, but each has pieces that can be integrated to do what we want. I can explain more at a lab meeting or in person.
There are two or three approaches to tracking cursor behavior (including panning/zooming) using Blender.
  
== Way one: ==

  * render video images in Blender with shading
  * use that shading to emulate 3d (panning/zooming)

== Way two: ==

  * translate panning/zooming as 3d movement by having two tracking points, one for the cursor and one for a reference frame.
  * problem: figure out how to deal with the movie image being cut off (this stops the tracking and mars reference-frame tracking judgments made with Blender... still figuring out the solution to this).
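Way two can be sketched numerically: with two reference markers tracked per frame, the change in their separation gives zoom, and the shift of their midpoint (corrected for that zoom) gives pan. A rough Python sketch of the arithmetic (the function and marker coordinates are illustrative, not Blender API calls):

```python
import math

def pan_zoom(ref_a0, ref_b0, ref_a1, ref_b1):
    """Estimate (zoom, pan) between two frames from a pair of
    reference markers tracked in each frame.

    Model: p1 = zoom * p0 + pan for any screen point p."""
    # zoom: ratio of distances between the two reference markers
    zoom = math.dist(ref_a1, ref_b1) / math.dist(ref_a0, ref_b0)
    # pan: where the old midpoint ended up, minus where pure zoom would put it
    mid0 = ((ref_a0[0] + ref_b0[0]) / 2, (ref_a0[1] + ref_b0[1]) / 2)
    mid1 = ((ref_a1[0] + ref_b1[0]) / 2, (ref_a1[1] + ref_b1[1]) / 2)
    pan = (mid1[0] - zoom * mid0[0], mid1[1] - zoom * mid0[1])
    return zoom, pan

# Made-up markers: the image doubles in size and shifts by (1, 1)
z, p = pan_zoom((0, 0), (2, 0), (1, 1), (5, 1))
# z == 2.0, p == (1.0, 1.0)
```

With zoom and pan recovered per frame, the cursor marker's screen position can then be converted into image coordinates, which sidesteps part of the cutoff problem as long as both reference markers stay in frame.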
    
  
=== To Do List: ===

  * get everyone up to speed on Blender tracking.
  * figure out if a 3d shading render is a viable option for cursor tracking (panning/zooming too)
  * be able to obtain coordinates from the python text file.

\\

====== Notes from 2016.10.8 ======

===== Remaining Tasks: =====
  
  * Clean-up aesthetic of VE
  * Make Blender file an executable module.
  * Run a pilot subject!
  
===== Exporting tracking data =====
  
__**Stephanie**__ found this page with instructions on how to extract raw object tracking data on 11/16/15:
  
http://blender.stackexchange.com/questions/23218/getting-raw-data-from-object-track
  
There is also an option in Blender under File -> Export -> COLLADA (.dae) that will create a COLLADA file. Wikipedia describes this format as a 3D interchange file format, and says python has a module called pycollada that will read this data. The link above suggests you can use python to export this data into .csv if needed.
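A minimal sketch of that last .csv step, assuming the per-frame marker coordinates have already been read out of Blender (e.g. via pycollada or the stackexchange recipe above); the sample track data here is made up:

```python
import csv
import io

def export_track_csv(samples, fileobj):
    """Write (frame, x, y) tracking samples as CSV with a header row."""
    writer = csv.writer(fileobj)
    writer.writerow(["frame", "x", "y"])
    writer.writerows(samples)

# Made-up samples standing in for real marker coordinates
track = [(1, 0.512, 0.431), (2, 0.53, 0.44), (3, 0.551, 0.452)]
buf = io.StringIO()
export_track_csv(track, buf)
```

For real use, open the destination with ''open(path, "w", newline="")'' instead of the in-memory buffer, as the csv module docs recommend.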
  
\\
  
===== 02/15/2017 Lab meeting: Re-introduction to study and Task List =====

1. Aesthetics: Presence and characteristics of floor, ceiling, walls, other computers, etc.

2. Make Blender file an executable module.

3. Putting words on the screen (one image with words and one image with just boxes)

4. Generating and exporting the data file

5. Making poison-boxes for all the words

6. Blender tracking (Stephanie and James posted links about how to do this above)

7. Writing IRB

8. Assigning experimenter roles to everyone
  
  
  
~~TODOLIST showdate:yes  ns:research:internal_small_version~~
  