====== Overview ======

<2018-12-19 Wed> Notes for creating VT PSYC 4364 Senior Seminar "Cognitive Neuroscience of Decision Making," Spring 2019.

\\
----
\\

====== Activities ======

===== Critical reading questions =====

==== Make list of submitted questions ====
  1. Download from Canvas

    Make directory for the week, e.g.

    mkdir -p ~/Google\ Drive/teaching/CNDM/discussions/wk2_gustation/article/submissions/

    Move

    submissions.zip

    to the "submissions" directory for the week.

    Unzip it, and REMOVE THE ZIP FILE. The script that handles extracting the text from the html files needs the submissions/ dir. to include ONLY html files.
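
    For example, using the wk2_gustation directory from above:

    cd ~/Google\ Drive/teaching/CNDM/discussions/wk2_gustation/article/submissions/
    unzip submissions.zip && rm submissions.zip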

  2. Extract questions from html files using scripts

    Edit this file to have the correct directory names defined near the top:

    ~/Google Drive/teaching/code/Canvas/text/extract_text_from_html_files.sh

    This file runs xsltproc using an associated .xsl script:

    ~/Google Drive/teaching/code/Canvas/text/getQuestionsCNDM.xsl

    Run the script from whichever directory you like, because the script uses absolute paths for filenames.

    bash ~/Google\ Drive/teaching/code/Canvas/text/extract_text_from_html_files.sh

    This will make a new file, whose name is defined in the script; CHANGE it to something appropriate, but not "final," because it will need to be edited by hand and then run through another script. Example:

    article_questions_UNSORTED.txt
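
    For reference, the core of the script is presumably something like this minimal sketch (one xsltproc call per submission file, concatenated; the variable names here are just illustration, check the actual script):

    SUBDIR=~/Google\ Drive/teaching/CNDM/discussions/wk2_gustation/article/submissions
    XSL=~/Google\ Drive/teaching/code/Canvas/text/getQuestionsCNDM.xsl
    # run the .xsl against every html file and concatenate the extracted text;
    # the output file name is whatever is defined in the script
    for f in "$SUBDIR"/*.html; do
        xsltproc "$XSL" "$f"
    done > article_questions_UNSORTED.txt
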
  3. Edit the questions by hand

    A good regexp replacement to run: strip non-ascii characters. In particular, the script above often leaves a non-printing character that shows up as an orange underscore in emacs (it appears there, but not in other programs that display the .txt file).

    In emacs, to replace non-ascii characters with nothing:

    M-x replace-regexp [^[:ascii:]]

    Ideally each question line begins with a category like "Clarification:".

    To sort the lines using emacs, select all lines, then:

    M-x sort-lines
    1. Clean up the lines

      They aren't always given the correct category labels, etc.

    2. Organize the questions into themes

      DON'T put blank lines between the themes to separate them yet. That can be done after the next script appends question numbers to each line.

      Try these emacs procedures:

      1. Highlight one term

        M-x highlight-regexp

        and also M-x unhighlight-regexp.

      2. Kill and yank all lines with term

        I tried using M-x sort-regexp-fields but couldn't get it to do what I wanted, which was to sort all lines containing a given term so they end up next to each other.

        However, found this emacs function, which allows me to do that by killing all such lines together so that they can be yanked back together, all next to each other!

        From https://www.emacswiki.org/emacs/KillMatchingLines

        (defun kill-matching-lines (regexp &optional rstart rend interactive)
          "Kill lines containing matches for REGEXP.
        See `flush-lines' or `keep-lines' for behavior of this command.
        If the buffer is read-only, Emacs will beep and refrain from
        deleting the line, but put the line in the kill ring anyway.
        This means that you can use this command to copy text from a
        read-only buffer.  \(If the variable `kill-read-only-ok' is
        non-nil, then this won't even beep.)"
          (interactive
           (keep-lines-read-args "Kill lines containing match for regexp"))
          (let ((buffer-file-name nil)) ;; HACK for `clone-buffer'
            (with-current-buffer (clone-buffer nil nil)
              (let ((inhibit-read-only t))
                (keep-lines regexp rstart rend interactive)
                (kill-region (or rstart (line-beginning-position))
                             (or rend (point-max))))
              (kill-buffer)))
          (unless (and buffer-read-only kill-read-only-ok)
            ;; Delete lines or make the "Buffer is read-only" error.
            (flush-lines regexp rstart rend interactive)))
    3. Save to a new name

      If name generated by the script was *UNSORTED.txt, do *SORTED.txt.

  4. Append question numbers

    Run this script on the hand-edited file:

    ~/Google\ Drive/teaching/code/Canvas/text/add_question_numbers.sh [INPUT FILE NAME ending in .txt]

    The output file will be named like the input file, but with the .docx extension instead of .txt. It will be in MS Word format.
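
    The numbering step is presumably something like this minimal sketch (the real script also does the .txt-to-.docx conversion, perhaps via pandoc; treat all of this as illustration, not the actual script):

    IN="$1"
    # prefix each non-empty line with Q1, Q2, ... (exact label format is a guess)
    awk 'NF { printf "Q%d %s\n", ++n, $0; next } { print }' "$IN" > "${IN%.txt}_numbered.txt"
    # the docx could then be produced with, e.g.: pandoc "${IN%.txt}_numbered.txt" -o "${IN%.txt}.docx"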

  5. Add spaces between themes

    Edit the .docx file by hand.

    Select all and change all text to bold.

===== Discussions =====

==== Collaborative document ====
  1. Download document in two formats

    Using Google Docs, export file as both .docx and .txt.

    Suggested filename: wk[number]_[topic], e.g. wk2_gustation.docx.

  2. Bash script to detect usernames in collaborative document

    Change filenames in this script, then run it using bash:

    ~/Google\ Drive/teaching/code/Canvas/text/find_usernames.sh

    Will produce an output file; current example name:

    username_scores.txt

    Gives count of each username (case-insensitive) found in .txt version of collaborative document.
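
    The counting logic is presumably along these lines (a sketch; the username file is the one from the Materials section below, and grep -o | wc -l is used because grep -c would count lines rather than occurrences):

    # count case-insensitive occurrences of each username in the exported .txt
    while read -r name; do
        n=$(grep -io "$name" wk2_gustation.txt | wc -l)
        printf '%s %s\n' "$name" "$n"
    done < usernames_list.txt > username_scores.txt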

==== Discussion leaders ====
  1. Assigning groups

    1. Use shuf in bash to randomize list of students

      shuf usernames_key_HANDFIXED_GOOD.txt > groups_shuffled_list.txt
    2. Select groups in order from randomized list

      Take first 8 pairs, then 4 triples.
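
      A quick way to pull these off the shuffled list (a sketch, assuming 28 students, i.e. 8x2 + 4x3):

      # first 16 names become 8 pairs, the remaining 12 become 4 triples
      head -n 16 groups_shuffled_list.txt | paste - -
      tail -n +17 groups_shuffled_list.txt | paste - - -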

      Assign names: GroupA, GroupB, etc.

      Save to document:

      groups_list.txt
    3. Assign groups on Canvas

      Name "group set:" discussionleadergroups

      Select "I'll create groups manually."

      Create groups GroupA etc.

      Drag student names into their groups.

    4. Post survey to Canvas

      Make it an assignment, not a quiz/survey. This way, it can be a group assignment on Canvas.

      Make submission "Text Entry" and ask groups to list their top three preferences in order.

  2. Summary document

    2019-08-26

    Made template .docx file

    ~/Google Drive/teaching/CNDM/syllabus/CNDM_discussion_summary_TEMPLATE.org
====== Materials ======

===== Username list =====

These are in a file: usernames_list.txt

==== Making the list ====
  1. List of component words

    I used the "Mnemonics" usernames generated using website

    https://www.michaelfogleman.com/phrases/

    These usernames are pairs of words taken from the Mnemonic Encoding Word List.

    https://gist.github.com/fogleman/c4a1f69f34c7e8a00da8

    http://web.archive.org/web/20090918202746/http://tothink.com/mnemonic/wordlist.html

    http://web.archive.org/web/20091003023412/http://tothink.com:80/mnemonic/wordlist.txt

    I have copied the list to a file for future use:

    mnemonicwordlist.txt

  2. Generating two-component usernames

    2019-08-26

    [Couldn't find any notes on how I did this for the first semester, Spring 2019, so let's assume I did it by hand then.]

    By hand in emacs, made version of mnemonicwordlist.txt that has only the words, capitalized, in a single column:

    ~/Google Drive/teaching/CNDM/admin/usernames/mnemonic_word_list_one_column.txt

    Then can use "shuf" command to randomize:

    shuf mnemonic_word_list_one_column.txt > randomized_list.txt

    Finally, can then just remove every other newline to join together pairs of words (by hand), until have enough for the class to choose from.
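
    The joining can also be done in one line (paste merges every two lines; "\0" means no delimiter; the output file name here is just an example):

    paste -d '\0' - - < randomized_list.txt > joined_pairs.txt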

    [Decided against the following, seems too annoying for students' use: Can also append "F19", "S20", etc. to beginning of strings to make it easy to tell when sections added to the documents, especially for the use of the website documents.]

    Delete the rest of the column. Save result as

    usernames_for_papercutter.txt

    for that instance of the course.

    Will most likely want to remove some combinations by hand, e.g. AlcoholSomething, or anything with a proper name (proper names are included in the mnemonic words). IMPORTANT: Also compare to past lists to try to avoid duplicating even single components that have been used in the past, if possible. Probably easiest to do this using the file "randomized_list.txt".
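
    A sketch of that comparison, assuming past components are kept one per line in a file (the file name here is hypothetical):

    # flag any candidate word that matches a previously used component
    grep -i -f past_components.txt randomized_list.txt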

    After letting students choose names from pieces of paper, cull the list and call it

    usernames_list.txt
==== Making key to match usernames with student names ====

Download submissions from Canvas for a non-credit "assignment" in which students submitted the text of their usernames.

Edit the top of this script by hand to set the directories on which it will run:

  ~/Google\ Drive/teaching/code/Canvas/text/extract_usernames.sh

This will produce a file

  usernames_key.txt

which has two columns: students' last names and usernames. NOTE however that if a student has a two-word last name, only the final word will get printed. To overcome this limitation, and to produce a document that can be printed and used as an attendance checklist, can run this version of the previous script:

  ~/Google\ Drive/teaching/code/Canvas/text/extract_realnames.sh

which produces

  realnames_key.txt

This prints three names for every student (with an "x" for the middle name of those with only two names on Canvas) as well as the username, in four columns.

IMPORTANT NOTE: after compiling the list using scripts, e.g. extract_usernames.sh, MUST check for non-printing characters, which will often be included in the output of adc's "extract from Canvas html" scripts. Emacs will display them, e.g. the non-printing character that looks like an underscore. Remove all of these! Otherwise the script that uses grep to find usernames in the collaborative document, find_usernames.sh, will fail for names that have the non-printing characters as part of the username string.

===== Email list =====

An online search for "Canvas email list" led me to:

https://community.canvaslms.com/thread/9885

which included instructions for using the Canvas API via URLs, in a reply addressed to James Jones:

"Thank you for your thorough API Request information both here and on another page I visited yesterday. Using what you shared, I was able to piece together a solution for my situation and figured I ought to share it to benefit the community. Applying your API request to the URL looks something like this:

https://canvas.institution.edu/api/v1/courses/XXX/users?enrollment_type[]=student&include[]=email&per_page=100&page=1

To modify this to your case there are three things to know:

  * canvas.institution.edu needs to be your organization's page
  * XXX needs to be the course number
  * If there are more than 100 students, you increase the page value at the end of the URL to get the next 100 students.

You can leave out the "while(1);" from the beginning of the JSON code, copy the rest, and paste it into a converter.

P.S. If there are more than 100 students, replace the ending hard bracket with a comma and add the next page's results to the string before running the converter to get it compiled into a single document."

Link to a "converter": http://www.convertcsv.com/json-to-csv.htm

I could have used the converter, but instead I just pasted the text from the resulting web page into a file

  email_line.txt

and used grep:

  grep -o "[a-z0-9]*@vt\.edu" email_line.txt | sort | sed 's/$/,/' > email_list.txt

The "-o" option makes grep print only the matching text. The sed command adds a comma to the end of each line – did this so the list could be pasted into the Zotero group membership invite web page.

===== Zotero group =====

I used the email list (above) to invite students to the group I created:

  cognitive_neuroscience_decision_making

2019-09-01 Last week, I backed up the Spring 2019 semester's zotero library in two ways, and then deleted it from the online group to make way for the Fall 2019 semester's students.
  - Exported library to zotero rdf format, added to a new directory called ~/Google Drive/teaching/CNDM/zotero_group_lib/
  - Moved all the files within zotero out of the shared group folder to the CNDM folder under "teaching."

====== Publishing ======

===== Website =====

2019-08-05 This is about publishing at least the article discussion summary documents on dokuwiki. Currently the idea is to do this process separately for each week's discussion.

==== Summary list of steps for posting summary documents to dokuwiki web pages ====

  * Hand edit .docx file
    * Add heading "Topic: [topic name]" at top
    * Add one-sentence explanation of topic if necessary
    * Add "Article Discussed" section
    * Add reference to article
    * Move neuroimaging section to after Brief Summary
    * Add sentence with link to Neurosynth
    * Fix formatting, at the very least the Top 5 Pubmed Articles text
    * Move the Unanswered Questions to main questions section
    * Make "Questions Posed by Class" an L1 heading
    * Make/check topic L2 headings for questions
  * On Linux terminal
    * Modify and run the bash script to generate dokuwiki .txt file from .docx
  * On dokuwiki site
    * Make a dokuwiki page for the topic
    * Paste contents of dokuwiki .txt file
    * Upload image files using media manager
    * Hand edit formatting as necessary
    * Add link to Zotero library to topic page

==== Prepare the summary document ====

Anthony decided to change the format for the web pages, compared to the assignment instructions.
  1. Modify the .docx file

    1. First section is reference to the article

      This was not part of the Spring 2019 semester instructions, so add it if necessary.

      Section name (heading level 1): "Article Discussed"

      Body of section is a citation.

      Ideally this reference will appear in the Bibliography section, but don't mess with Zotero if fixing retroactively.

    2. Put the neuroimaging parts near the beginning, instead of at the end.

      Currently (based on the instructions for the Spring 2019 semester) these come after the Brief Summary.

    3. Check the .docx document for any weird or nonstandard formatting, and fix it.

      Example: the "Top 5 Pubmed articles" section in wk2gustation was formatted as a set of single lines with newlines at end of each, for some reason.

    4. Move the text of the two "Unanswered Questions" into the thematic sections of answered questions

      Because they have been answered now!

  2. Convert from docx to dokuwiki format using pandoc

    The following will create a directory named "./media" with image files in it named image1.png, etc.

    pandoc --extract-media . input.docx -o output.txt -t dokuwiki

    This is from https://stackoverflow.com/questions/39956497/pandoc-convert-docx-to-markdown-with-embedded-images

==== Make the dokuwiki .txt file for the web page ====
  1. Fix the dokuwiki file

    1. Bash script to automate the following steps

      ~/Google\ Drive/teaching/code/dokuwiki/cndm_fix_dokuwiki_txt.sh
      1. Change the media (image) file names

        Assuming that the media files are all image files (I don't know how different files might get named), the files will be named "image1.png", "image2.png", etc.

        The plan is to upload multiple summary documents' image files to the same dokuwiki namespace, e.g. "teaching:imageFileName.png". The dokuwiki media manager will want unique names for all of the various files. Therefore, it won't work to leave generic image file names like "image1.png" after converting each individual summary document, because then there will be multiple files with the same name.

        Solution: append a name for the specific summary document to the image file names. Can do this in the ./media/ dir:

        rename 's/\.png$/_gustation.png/' *.png

        Then will need to change the references to the image files in the dokuwiki .txt document.

        Here are example lines from a dokuwiki .txt document:

        ====== Neurosynth map for the term: ======
        {{:./media/image1.png?570x570|Macintosh HD:Users:acate:Downloads:mni_icbm152_t1_tal_nlin_asym_09a_acMasked (2)2.png}}

        Need to

        • Change "image1.png" to "image1gustation.png" (for this summary document about gustation)
        • Change the path name from "./media/" to "teaching:"
        • Remove the image alt text "Macintosh HD:Users … (2)2.png"
        sed -i.bak 's/\.png/_gustation.png/g' dokuwikiFileName.txt
        sed -i.bak 's/:\.\/media\//teaching:/g' dokuwikiFileName.txt
        sed -i.bak 's/|.*}}/|}}/g' dokuwikiFileName.txt
      2. Remove the table of contents

        If there is one in the .docx document. It will get reproduced as plain text after pandoc conversion to dokuwiki.

      3. Remove question labels

        Remove the "Clarification:" etc. labels, and remove the numbers after the "Q[0-9]+". Remove spaces around the terms, too

        sed -i.bak 's/\(Q[0-9]*[[:space:]]*\)[Cc]larification:[[:space:]]*/\1/g' dokuwikiFileName.txt
        sed -i.bak 's/\(Q[0-9]*[[:space:]]*\)[Ii]mplication:[[:space:]]*/\1/g' dokuwikiFileName.txt
        sed -i.bak 's/\(Q[0-9]*[[:space:]]*\)[Rr]elated:[[:space:]]*/\1/g' dokuwikiFileName.txt

        NOTE: In many or all .docx files the questions are in bold text, which will get converted to text wrapped in pairs of "%%**%%" in dokuwiki format.

        To un-bold the question text only, and not all bold text, use this command, which searches for the Q[0-9]* labels. It gets rid of the pair of "%%**%%". The "\1" refers to the pattern captured in the first part of the s%%//%%/g statement by the \( \) group.

        sed -i.bak 's/\*\*\(Q[0-9]*.*\)\*\*/\1/g' dokuwikiFileName.txt

        [BUT SEE BELOW; CAN DO ADDITIONAL STUFF IN THIS COMMAND AS WELL]
      4. Make each question a subheading

        [NOTE: AS DESCRIBED BELOW, I DECIDED TO SCRAP THIS IDEA; did continue to set questions to headings, but at a higher level for bigger text. Just seems like a good idea in case want to link to questions in the future.]

        Could do this in MS Word, etc.

        Better to do this after converting to dokuwiki .txt file, by searching for "Q[0-9]+" lines and wrapping them in appropriate number of "=".

        Best to combine this step with the un-bolding step from above. Also remove numbers after the "Q" here.

        sed -i.bak 's/\*\*Q[0-9]*\(.*\)\*\*/== Q: \1 ==/g' dokuwikiFileName.txt

        Then would need to set dokuwiki table of contents (toc) to exclude that level of heading. (This should be the case by default for level 5 headings).

        Using two equals signs ("==") corresponds to a level 5 heading in dokuwiki. The resulting html will be like:

        <h5 id="qwhat_is_gustatory">Q: What is gustatory?</h5>
        1. Make list of question section links

          Then could make a list of all questions across all pages for the different topics, and create links to their sections (?).

          Appears that could find text for making links by looking at the html source for the dokuwiki pages, specifically by searching for headings of a specific level:

          <h5 id="qwhat_is_gustatory">Q: What is gustatory?</h5>

          The id gets appended to the page URL when navigating, e.g., via the table of contents to a specific section:

          http://visneuro.psyc.vt.edu/doku.php?id=teaching:cndm_topic_gustation#qwhat_is_gustatory

          The messy heading text (messy because of whitespace and capital letters) gets converted to a simpler id string by dokuwiki.

          Presumably would use xslt to extract the ids and the heading text, and output html that could be embedded in a dokuwiki page.
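
          A rough bash alternative to xslt (a sketch; assumes the page html has been saved locally and that the h5 elements contain no nested tags):

          # pull level-5 heading ids and text from the saved page source,
          # one per line as "#id<TAB>heading text"
          grep -o '<h5[^>]*>[^<]*</h5>' page.html |
              sed 's@<h5[^>]*id="\([^"]*\)"[^>]*>\([^<]*\)</h5>@#\1\t\2@'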

      5. Tweak the automatically generated table of contents

        This assumes dokuwiki has the plugin "toctweak" installed.

        Add this at the top of the file to prevent higher-numbered headings from appearing in the toc:

        "~~TOC 1-2~~"
    2. Manual changes

      1. Link to zotero library

        After posting the web page (as below), add a link to the zotero library for the discussion at top (just before the first heading "Article Discussed").

        Here is the base URL for the course's Zotero site:

        https://www.zotero.org/groups/2279132/cognitive_neuroscience_decision_making/items?

        For gustation, the link is:

        https://www.zotero.org/groups/2279132/cognitive_neuroscience_decision_making/items/collectionKey/2Z7TGN6D
      2. Add spaces and horizontal rules around sections

        Scheme:

        • Horizontal rule (with spaces surrounding) at end of L1 sections
          • Except for first two ("Topic:" and "Article Discussed")
        • Extra empty line ("\\") after L1 headings
          • Except for first two, which also get one before
        • Two extra empty lines before and one after L2 headings
        • Extra empty line before L3 headings (which should always have "Q: " text)
          • Except for first one in section

        Tried using sed, but way too confusing, so used emacs.

        Wrote new section into

        cndm_fix_dokuwiki.sh

        Also have a script that does this for a single file (need to edit the input file name in the script). Can use it to fix existing dokuwiki pages that have already had extensive hand edits: copy the text from the page to a temp file, run the script on that file, then paste the output back into dokuwiki.

        txt2md_cndm_dokuwiki.el
  2. Make a list of links to individual questions

    Make the list in dokuwiki syntax, so can paste the resulting text into a dokuwiki page.

    Use xsltproc to process the html code of the web page you have just made for the discussion. [Note: this assumes that you have actually done some of the following steps below, which I know is out of order.]

    1. XSL template used as reference for writing this code

      Anthony consulted xsl templates he had made previously for other activities to remember how to write xslt code:

      ~/Google Drive/teaching/code/Canvas/text/getQuestionsCNDM.xsl
    2. Format for dokuwiki links to follow

      [Note not typing the double square brackets below, because emacs org mode will display them and their contents as a link in this here file, which is inconvenient.]

      Section headings (the thematic headings created by authors of the summary document) ought to be L2 headings. (There should be a L1 heading "Questions posed by students" to begin the section.)

      [Don't actually type the square brackets below]

      [newline] ****Section Heading text**** [newline] (2x [)Question URL|Question Heading text(2x ]) [newline] [next one] [newline] …

      UPDATE: Too time consuming for Anthony to figure out xslt code for selecting the topic headings, so just make a list of all the questions:

      (2x [)Question URL|Question Heading text(2x ]) [newline] [next one] [newline] …

    3. xsl file

      ~/Google Drive/teaching/code/dokuwiki/makeQuestionLinkList.xsl

      It works great! However, I decided not to use it, see below.

    4. PROBLEM: Links don't take visitor to section!!!

      2019-08-06

      If you put the URL in the address bar and hit Enter, it works; otherwise, the location is off!!!

      I tried using "absolute" URLs (not internal dokuwiki page links), but same result!

      Also got a spam warning when doing the latter; turned off spam blacklisting, which solved the spam problem but not the location problem. Turned spam blacklisting back on.

      Also, the list is overwhelming to skim. It seems like readers might as well browse the actual web page itself.

      DECIDED TO SCRAP THE WHOLE LINKS TO QUESTIONS PROJECT!!

==== Post the dokuwiki page ====

  - Create a new link for the discussion on the main course page
  - Paste the contents of the dokuwiki .txt file
  - Upload the image files (./media) using the media manager

==== Add search bar to the main web page ====

Can make it specific to the namespace, if have installed the "searchform" dokuwiki plugin. See its page for syntax.

==== Add graphviz graph of production process ====

Wrote graphviz dot code to make a nice graph. Can copy and paste this code to dokuwiki, wrapped in the plugin's tags.

File:

  ~/Google Drive/teaching/CNDM/website/graphviz/cndm_document_process.gv

Code to export to image:

  dot -Tpng ~/Google\ Drive/teaching/CNDM/website/graphviz/cndm_document_process.gv -o ~/Google\ Drive/teaching/CNDM/website/graphviz/cndm_document_process.png

Export to .svg for use of the graphic elsewhere: substitute -Tsvg and *.svg in the code above, i.e.:

  dot -Tsvg ~/Google\ Drive/teaching/CNDM/website/graphviz/cndm_document_process.gv -o ~/Google\ Drive/teaching/CNDM/website/graphviz/cndm_document_process.svg

===== PDF =====

==== Moved course to sub-namespace in teaching namespace ====

Namespace is now "teaching:cndm". Main topic page is now "teaching:cndm:cndm". This is so that the entire cndm namespace, and none of the other teaching pages, can be exported to a single PDF.

==== Make export link using dw2pdf plugin ====

Can use the dw2pdf dokuwiki plugin. See the plugin page for syntax.