2014-2015 HASTAC Scholars: Call for Applications

Deadline for applications: August 25, 2014
Announcement of Award: September 3, 2014

Are you a graduate student engaged with innovative projects and research at the intersection of digital media and learning, 21st-century education, and technology in the arts, humanities, and sciences? Would you like to join an international conversation about the digital humanities? If so, you are invited to apply for the opportunity to become a 2014-2015 HASTAC Scholar. As a Scholar, you will represent Fordham University in HASTAC’s prestigious online community. Two successful candidates will each receive a $300 honorarium from the office of the Dean of GSAS.

HASTAC (pronounced “haystack”), which stands for Humanities, Arts, Science, and Technology Advanced Collaboratory, is an interdisciplinary, international network of undergraduate and graduate students and faculty, as well as librarians, archivists, museum curators, publishers, and IT specialists. Members of the HASTAC community blog, host forums, organize events, and discuss new ideas, projects, and technologies that reconceive teaching, learning, research, writing, and the structuring of knowledge. For more information about HASTAC Scholars and to see their discussion forums, please see the HASTAC Scholars website and also this page.

Successful candidates will:

  • Remain in good standing with the university.
  • Give one workshop centered on integrating digital tools into the classroom or research. The workshop will be open to the campus community and given by April 2015.
  • Be an active participant in the Fordham Graduate Student Digital Humanities Group by leading or planning one or more events related to the digital humanities, including workshops, speakers, and/or reading groups.
  • Frequently engage, according to your interests and abilities, in the discussions taking place on the HASTAC website, as well as related events taking place during the year.
  • Between September and May, contribute no fewer than two posts per semester to the HASTAC Scholars blog and to the Fordham GSDH blog (these may be cross-posted).
  • Report your activities at least twice a semester to a faculty mentor to be assigned to you.

Applications will be evaluated based on the scholar’s activities in the areas of digital humanities research, pedagogy and technology, and service to the community. Highly motivated students with limited exposure to the digital humanities are encouraged to apply. This opportunity is an excellent way to learn more about digital media and practices.

To apply, please answer the following questions:

  • Why do you want to become a HASTAC Scholar?
  • How will being a HASTAC Scholar support your current work at Fordham? Please speak to this question in terms of both your teaching and research, noting your experience with digital humanities research and pedagogy.
  • What strengths and experience can you contribute to the HASTAC community?

Your application must include a brief recommendation from a faculty member who can speak to your scholarship and ability to collaborate with others, both in person and online.

Send applications and recommendations as Word documents to Dr. Elizabeth Cornell, cornellgoldw_at_fordham.edu, with “YOURLASTNAME-HASTAC APP” as the subject line. Applications are due no later than 5:00 PM, August 25, 2014. Members of Fordham’s faculty Digital Humanities Working Group will review applications, and two scholars will be announced no later than September 3. Selected scholars should then complete an application at the HASTAC website by September 10. Details for that procedure will follow if you are selected.

Online Profile Management Workshop

This post is a response and reaction to the workshop I led on April 23rd, Your Online Presence: Google, Facebook, and Life Ahead. It is not a summary of the workshop, but rather my takeaways from it, particularly my suggestions and questions for anyone interested in leading a similar discussion.

Many DH-savvy people perhaps take for granted the idea of managing one’s online profile — we know that we will be Googled by other scholars, by potential employers, even by potential dates.  As participants in DH projects, we often have content associated with our names that is readily available.

I think it is easy for us to forget, however, that not everyone is as interested in, or as aware of, their online presence: we may assume too high a level of awareness.  I found, when I presented for a class of undergraduate juniors and seniors, that while most of them understood what an online “presence” consisted of, many of them appeared unconcerned about what it contained.

The idea, for example, that someone might lose their job over a picture of drinking posted on Facebook seemed horrifying and almost unbelievable to some of the students.  The idea of generating content intentionally on sites like LinkedIn and a personal blog seemed foreign to many of them, and the idea of using social media professionally (or of employers using/searching Facebook, much less any other social media site) seemed, in some cases, to be quite a bit to swallow.  Other students seemed to already be quite media-savvy, so it was a mixed group: I don’t mean to imply that all of them were surprised.

My biggest question, which I hope we will have the chance to discuss as a group in the fall, but which I encourage anyone to respond to in the comments, is this:

How essential do you consider online presence management?  Does everyone need to worry about this, or only those who are interested in pursuing a more digitally-oriented job?

Mapping Religious Concern in the Later Middle Ages: Software Ups and Downs for DH Visualizations

By Alisa Beer

At the final meeting of the Digital Humanities Graduate Group on April 23rd, Alisa Beer (that’s me) presented “Mapping Religious Concern in the Later Middle Ages.”

Jacqueline Howard followed with her presentation on the Bronx African American History Project and Digital History, which she wrote a blog post about for us. Since Jacqueline already posted about her topic, I will focus on my own presentation’s topic.

Mapping Religious Concern in the Later Middle Ages: Software Ups and Downs for DH Visualizations
My presentation derives from work I did for my MA thesis, Guido de Monte Rocherii’s “Manipulus Curatorum”: the Dissemination of a Manual for Parish Priests in the Fourteenth and Fifteenth Centuries.

The Manipulus Curatorum, or Handbook for Curates, is a text that instructs priests in their duties. It survives in 261 identified manuscript copies, the majority of which are either undated or dated to the fifteenth century. This is, as medievalists reading this blog will recognize, an unusually high rate of manuscript survival.

In order to figure out where this text may have been used, or at least where its manuscripts are currently housed, I created a Google Map in the fall of 2012. I then used Microsoft MapPoint in the spring of 2013 to create a similar map, and finally, in the spring of 2014, I tried CartoDB. The features of each differed at the time I used them, and in this post I will discuss how each helped me visualize my data and get more information out of my spreadsheet of manuscripts.
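All three maps start from the same spreadsheet-to-aggregation step: tally the manuscripts by location and by date, then visualize the counts. A minimal sketch of that counting step (the column names and sample rows here are invented for illustration, not the actual thesis dataset):

```python
import csv
import io
from collections import Counter

# Hypothetical rows from a manuscript spreadsheet: shelfmark, city, country, date.
SAMPLE = """shelfmark,city,country,date
MS 101,Munich,Germany,15th c.
MS 102,Vienna,Austria,undated
MS 103,Oxford,England,14th c.
MS 104,Munich,Germany,15th c.
"""

def count_by(rows, field):
    """Tally manuscripts by a single column (e.g. country or date)."""
    return Counter(row[field] for row in rows)

rows = list(csv.DictReader(io.StringIO(SAMPLE)))
by_country = count_by(rows, "country")  # density shading, as in MapPoint
by_date = count_by(rows, "date")        # differentiation by feature

print(by_country.most_common())
print(by_date)
```

The same counts drive a shaded-density view (MapPoint) or a filtered layer per date range (CartoDB); only the display differs.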

Google Maps

This was helpful because it was:

–Easy to learn and to use, if time-consuming.

This was less helpful because it:
–Didn’t handle multiple pins in the same location well
–Did not import spreadsheets at the time I was using it (Fusion Tables has changed all of that!)
–Did not have many display options

Microsoft MapPoint

This was helpful because it:
–Allowed for shading by density of points, which helped me see where the manuscripts were most concentrated.
This helped me to form a better view of where the manual had collected in the years since 1500. This was a fairly transformative realization, since it helped me focus my research geographically in ways that would have been harder had I relied on a spreadsheet and a general sense of how many were in Germany vs. Austria vs. England.
–Allowed for differentiation by features (such as date).
This allowed me to see, visually, exactly how many of the manuscripts were undated vs. fifteenth century, and how very rare the fourteenth-century manuscripts were, though I already knew that, and it wasn’t exactly a transformative realization.
–Imported data from a spreadsheet.
Oh, so lovely not to have to put every pin in by hand, and to be able to update the spreadsheet, re-upload the data, and not have to worry about finding the right pin and changing it individually.

Downsides included:
–A less-than-ideal visual display.
I am not a fan of its graphics. They’re fine, but they’re not appealing to me at all.
–A difficult user interface.
I found it cumbersome to work with, at best. I achieved my goals with it, but only by dint of stubbornness, online searching for help topics, and a good deal of wasted time.
–A very expensive paid version: $299.99, and a slightly hobbled trial version.
Enough said.




CartoDB

I liked CartoDB best of the options I tried because it:
–Has very flexible display options.
This was lovely. I was able to choose colors, map backgrounds, and other options, in order to visualize my data in the way I found most clear and helpful. This transformed my understanding of how the manuscripts moved, since I could see “only” the fourteenth-century ones, only the fourteenth-to-fifteenth-century ones, etc. I look forward to creating an animation of the spread of the printed editions using CartoDB, because it will be incredibly helpful to compare with a similar animation of the spread of printing in the same period.

–Imported data from a more complex spreadsheet than MapPoint.
I was able to import my entire spreadsheet and select data displays that were more complex than I managed in MapPoint. This allowed me to differentiate between a wider variety of dates, for example, and to add extra criteria, or otherwise display information that MapPoint and Google Maps were unable to help me with (at the time I used them).
–Was accessible online on any computer.

Downsides include:
–The need to sign up for an account, and limited functions of a free account, including the public visibility of free account data.
This didn’t deter me, but I think it might make some a bit leery. I’m also perfectly happy to have my data be publicly visible, but I know many people are not.
–The need for internet access.
While not always a problem, when my internet went out, I was very unhappy not to be able to use CartoDB at all.
–The cost of a paid plan — the least expensive is $29.99/month.
This is, annually, more than MapPoint. And it’s a subscription service, so you have to keep paying for it.

While I like CartoDB better than the alternatives, I’m still going to keep an eye out for open-source mapping software, and try my hand at Omeka’s mapping options, because I’m not content to pay $29.99/month for the ability to have more than 5 tables. At the moment, I don’t need more than 5, but I’d like to have a better sense of what’s out there before I subscribe to any program.

Thoughts? Comments? Suggestions of other mapping software? All would be more than welcome!

Five Digital Tools for Pedagogy and Research

As the academic year tumbles to a close, I would like to use my final blog post to discuss five tools that have made the past semester a little less precarious. Certainly, there are more advanced tools available, and I hope you will share them via the Comments thread. However, for this post, I simply want to focus on tools that I regularly use and rely upon to save time and frustration (if only a little).

A no-nonsense shortcut utility for the Mac, TextExpander ($25 edu) has taught me the virtues of automation. Designed around shortcuts (Abbreviations) and the texts they expand (Snippets), TextExpander allows you to supplement your desktop’s keyboard shortcuts and to build forms using a bevy of customizable templates. These templates can be as simple as custom email signatures (available system-wide), tools for validating and truncating URLs (Internet Productivity Snippets), or, my favorite, Fill-ins, with which you can create forms around predefined selections (Popups) and open fields (Fill-ins). Fill-ins have already saved me hours in writing midterm and final grade reports. (Thanks to TextExpander’s Statistics feature, which tracks and visualizes usage, I can report that the utility has saved me more than a dozen hours this semester.) Once I started thinking about my reports in terms of what could and could not be automated, I realized that much of what I write in my headers and footers can be accomplished using three or four different forms. By creating several such forms, I allow myself more time for rigorous reflections on student work.

When it comes to managing that work, TurnItIn (pricing depends upon your institution) has changed the way I evaluate research papers. After discovering that Fordham offers a free license for educators, I decided to use it for my students’ first batch of longer essays. I had noticed that despite our conversations about integrating and introducing research, many of my students were playing it fast and loose with outside sources, and I feared that there might be instances of academic dishonesty. Previously, when I harbored such concerns, I manually searched for phrases that sounded misplaced, a time-consuming and incomplete process. With TurnItIn, you can either ask your students to submit papers through the website or you can manually upload files. After a couple of minutes per paper, the site calculates the essay’s originality, providing a deceptively specific percentage of that paper’s derivation (given that it doesn’t parse quotations, it’s advisable that you not take the score too seriously and comb through the document yourself). I found that I could open each essay and see where it might be derivative. TurnItIn annotates the document and provides access to the sources of those annotations. (If the source isn’t available publicly, TurnItIn allows you to request access.) This semester TurnItIn helped me identify two instances of academic dishonesty, which, on my own, would have required hours of additional searching.

My biggest time-saving tool, however, has nothing to do with my teaching. When it comes to tracking, managing, and sharing citations, I cannot sing more loudly my praises of Zotero (free). Zotero lives where most of us begin our research: the web browser. While it was originally designed for Mozilla Firefox (where it still works best), you can also download the standalone application, connectors for Google Chrome and Apple Safari, and word processor plugins (for Microsoft Office and LibreOffice/OpenOffice). Zotero boasts particularly thorough integration with JSTOR and WorldCat; in search results you can tick off articles whose bibliographic information and content (such as full-text PDFs) you want to save. You can access citations via the Zotero button in your browser, where you can create folder hierarchies, export citations in just about any style, or view downloaded PDFs. If you register for Zotero’s free synchronization service (Zotero Sync), you can even access your citations on other computers. By default, you’ll get one hundred megabytes of storage, more than enough for citations alone, but somewhat miserly for PDFs. (After archiving twenty journal articles with PDFs, I exhausted more than a third of the allotted storage.) If, however, you intend to conduct your research from one desktop, your machine’s hard drive is your only limitation.

For stubbornly material resources, the latest version of Delicious Library ($25) tidies up everything on your bookshelf, including books, movies, albums, software, and gadgets. Adding items is easy. If it has a barcode, you can scan it using your desktop’s iSight camera or a barcode reader. I added the vast majority of my books using my MacBook’s webcam, but for editions that predate barcodes, I found that I could add them using keywords, authors, and titles. In addition to acting as my iTunes for everything outside iTunes, Delicious includes a particularly dulcet feature: Loans. You can check out books (or anything else) to anyone in your Contacts, and thanks to the software’s interoperation with Apple Calendar, you can set and track due dates. Given that I’m constantly swapping books with friends, this feature applies as much to me as it does to my peers. (You don’t keep friends by absconding with their books.)

I’ve saved my final tool until the end because it is a summer project unto itself. If Apple’s Pages is Microsoft Word with a fresh coat of paint, Literature & Latte’s Scrivener ($45 with a 15% edu discount) is a gut renovation. Based on the premise that long texts (e.g., chapters, dissertations, monographs) are composed of short texts, Scrivener allows writers to collect research, write in smaller, modular texts, and compile fragments into cohesive manuscripts that can be output in just about any imaginable format (from Word docs to ePubs). Whether you’re working on a novel or a recipe collection, Scrivener offers a template; alternatively, you can start Blank, just as you would a Word doc. The interface has three main components: the document at the center (Editor); metadata to the right (Inspector); and text hierarchy to the left (Binder). In the Binder, you can store drafts or just about any kind of research (PDFs, images, documents). Everything in Scrivener can be manipulated using drag-and-drop operations and a host of different views and layouts. What I like about Scrivener is that it meets me where I am. Although my diss chapter will become a single, continuous text (fingers crossed), it doesn’t start that way. Instead, Scrivener allows me to work in small chunks of text (paragraphs, sometimes more), and to synthesize them into a Word doc suitable for my readers.

A Global Voice: The Bronx African American History Project and Digital History

by Jacquelyne Thoni Howard
PhD Student, History Department, Fordham University

       In August 2013, I was assigned as the graduate assistant to The Bronx African American History Project (BAAHP). The project provides a digital voice to African Americans who lived in and contributed to the Bronx, and it seemed an exciting opportunity to me. After the first meeting with Dr. Mark Naison, Professor of African and African American Studies and founder of the BAAHP, I concluded this was no usual desk job. My project responsibilities include managing the digitization of the oral history interviews and the student workers who have been hired to help. Dr. Naison directed me to a cabinet which stored the entire project and suggested I start with an audit. Keeping this in mind, I created a 10-month project plan and started the audit. From there, the challenges began.

Finding Solutions to the Challenges

The Media. When conducting the first audit, I found a plethora of media in different formats. This cabinet, a time capsule of the media era, illustrated how the project adapted over time to new media. Thus, VHS, cassette tapes, mini-cassettes, CDs, DVDs, and digital media via iTunes housed the approximately 300 interviews of Bronx residents. Many of the interviews existed in different and duplicate formats. Summaries and transcripts, if they existed, had been saved in different places and formats. I could not begin the digitization until all the material—almost 900 pieces—was first placed in a logical order. I created an organizational system using Microsoft’s SkyDrive, which allowed me to inventory each item and its components (interview, transcript, and summary) for future upload into the library’s Digital Commons, a digital research repository. In the past six months, our student workers have replaced or created missing summaries and transcripts and readied over 30 interviews for upload.
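The audit described above boils down to tracking, per interview, which components exist in which formats, and flagging the items that are complete enough to upload. A minimal sketch of such an inventory check (the field names are hypothetical, not the actual SkyDrive schema):

```python
from dataclasses import dataclass, field

@dataclass
class InterviewRecord:
    """One interview and its components, as tracked in the inventory.
    Field names are illustrative only."""
    interviewee: str
    formats: list = field(default_factory=list)  # e.g. ["VHS", "CD"]
    has_transcript: bool = False
    has_summary: bool = False

    def ready_for_upload(self):
        # An item is ready once the audio exists in some format
        # and both written components are present.
        return bool(self.formats) and self.has_transcript and self.has_summary

records = [
    InterviewRecord("A. Resident", ["cassette", "CD"], True, True),
    InterviewRecord("B. Resident", ["VHS"], False, True),  # missing transcript
]
ready = [r for r in records if r.ready_for_upload()]
print(len(ready))  # 1
```

Keeping the check in one place makes it easy for a rotating workforce to see, at a glance, which of the ~300 interviews still need work.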

The Platform. I worked with Fordham University’s Walsh Library at Rose Hill to set up a space for the collection in Digital Commons. This digital repository stores media and document sources for research purposes. The library staff helped set up the site and customized it with project bios, pictures, folders, and established naming conventions. But herein lies another challenge. Whenever I need to make a change to the project, I have to go through the library staff. Sometimes it may take weeks to make changes because the staff has many other obligations in addition to managing Digital Commons.

Management. A large digitization project such as The Bronx African American History Project presents the challenge of needing an overseer with both project and people management skills. This project provided opportunities to hone my project management skills and to improve my people management skills. As a project manager, I ask questions, provide answers, and track benchmarks. As a manager of people, I’m concerned with the daily interactions of the workers. I also conduct meetings and manage work times and assignments in person and via phone, email, and Google+.

The Workers. I confront another problem involving the constant rotation of student workers and graduate assistants. A good project plan helps the next group pick up the pieces in an environment with a revolving workforce. Therefore, I create clearly defined turnover reports and attempt to organize the project for easier turnaround. However, establishing standard operating procedures and organizational systems takes time away from the actual work of digitization.

Also, workers and assistants need to be trained to use the technology to load the media sources consistently. In other words, I need to determine beforehand how and where to place digitized information within the system so that it is coherent. I train students to place information in specific, though sometimes illogical, places. (For example, we place the full name in the First Name data field box. By manipulating data fields to meet certain criteria, our project achieves a certain look. However, this approach challenges the accuracy of the project now and in the future.) To ensure quality, after students load an interview into Digital Commons, I check it before publishing it to the web.

Additionally, each of the student workers possesses different strengths and interests. I provide them with additional responsibilities to keep their motivation up, which sometimes ends up stalling the project. This experience, however, reaffirms that it is okay to take risks to empower workers, and to refocus the project when the risks do not take the intended path. Whenever I refocus, I propel the project forward by revising my project plan, and in the process I create a better organizational system for the next graduate assistant.

Other Challenges. Other challenges include departmental researchers, such as faculty and graduate students, who place holds on interviews they do not want published. This presents additional questions regarding how to organize this additional data set. The volume of the interviews and corresponding media makes these holds difficult to honor. This makes organizing media more challenging, considering that many media components, such as transcripts, summaries and the actual interviews, are missing. To combat this problem, the student workers load everything that is ready, regardless of its “hold” status, but we do not publish it until the hold is lifted. This approach allows for the time-consuming work to be completed upfront even though the students’ labors are not seen right away.

The last challenge deals with advertising the repository to future researchers. The answer lies with social media, Google, and the BAAHP website. By posting strategic messages within these media formats, we connect followers to the repository. Additionally, googling The Bronx African American History Project, or specific interviewees’ names, also provides access. I am still considering other ways to promote the repository beyond these outlets. Promotion points to a common problem in digital archiving: how do institutions with digital collections indicate what they have available so that researchers have access to ongoing digital additions? Is posting a catalog on the institution’s website enough?

Linking Challenges to a Larger Discourse in Digital History

     For digital history projects, careful planning, such as creating project plans, entrenching organizational systems, and authoring current and future communication strategies, is essential. However, student workers need proper management from graduate assistants, who should provide them with smaller chunks of information and direction, and take risks in order to motivate and empower them.

Using the BAAHP as a case study, digital archiving emerges as an important field. While it is true that research methods change as researchers bring the digital age to the field of history, the actual logistics of creating access to research materials also need to adapt. Archivists and institutions must drive this change through careful planning. Digital archiving means housing sources so that materials are more accessible, providing tools that make research more efficient and less costly and time-consuming, and keeping history meaningful in a world where values are changing.

The BAAHP provides an accessible voice to a group of people who might not possess other communication outlets within a historical context. It provides valuable research information to researchers. Additionally, this project provides an example of how to manage a large-scale digitization project. Lastly, it also contributes to the on-going discourse questioning the challenges, goals, risks, rewards and ethics of digital history.

digital humanities + pedagogy: a companion

visualization: a glossary

  • metadata: information about information. You’ll use metadata to describe attributes of your items, like their dates of creation, sizes, etc.
  • Dublin Core: a specific kind of metadata used by Omeka. Dublin Core keeps descriptions consistent.
  • omeka.net (compared to Omeka.org): there are two kinds of Omeka sites. Omeka.net is already hosted, meaning that you don’t have to install anything. Omeka.org is customizable, but you have to be comfortable installing Omeka on a server.
  • GIS (Geographic Information System): an integration of hardware, software, and data designed to capture, store, manipulate, analyze, manage and present all types of geographical data.  GIS is a “tool-centric” approach to digital mapping.
  • neogeography: combines the complex techniques of cartography and GIS and places them within reach of users. Neogeography is a “user-centric” approach to digital mapping.
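To make “metadata” and “Dublin Core” concrete, here is a minimal sketch of a Dublin Core record of the kind Omeka keeps for an item. The element set and namespace are real; the item and its values are invented for illustration:

```
<!-- A hypothetical item description using Dublin Core elements. -->
<record xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>Photograph of a campus workshop</dc:title>
  <dc:creator>Unknown photographer</dc:creator>
  <dc:date>1998-04-23</dc:date>
  <dc:format>image/jpeg</dc:format>
  <dc:description>One of several digitized prints in the collection.</dc:description>
</record>
```

Because every item uses the same fifteen-element vocabulary, descriptions stay consistent across a collection, which is exactly what the glossary entry above means by Dublin Core keeping descriptions consistent.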

    gamification: a glossary

  • achievements: accomplishments such as exemplary performance on specific assignments
  • add-ons: expansion packs that add new material, such as additional stories, new areas of exploration, new items, or more levels.
  • avatar: the character a player directs through a game.
  • dungeons: complex spaces that must be explored before they can be unlocked.
  • Easter egg: a hidden feature of a game
  • extrinsic motivation: motivation by means of tangible rewards or pressures, rather than pleasure (e.g. intrinsic motivation).
  • game choice: the pedagogical model. Justin Hodgson argues that, more than course readings, game choice shapes “content, structure, assignments, [and] activities” (50). Hodgson borrowed his Quest Line structure from World of Warcraft, whereas Benjamin Miller modeled his class on the gameplay of Legend of Zelda.
  • Game Master: the person who acts as an organizer, authority of rules, arbitrator, and moderator for a multiplayer game. Lee Sheldon discusses how a Game Master isn’t necessarily synonymous with a Game Designer: “A teacher need not be a game designer to be a Game Master, but, in most cases, must be the Game Master. Just as you are in charge of the development, you continue to be in charge of the classroom” (220).
  • gamification: the application of game mechanics to non-game activities.
  • guild: a community in a role-playing game. Guilds can comprise any number of players, depending upon the common goals and play style of the game.
  • hack: a clever or quick fix to a problem. It can also refer to a modification that allows users access to otherwise unavailable features (which might give the player an unfair advantage over opponents). McKenzie Wark wrote the book (A Hacker Manifesto) on it.
  • level chart: allows players to see one another’s scores. Scores are sometimes posted anonymously; other times, Game Masters use avatars to induce competition.
  • postmortem: a game development term, borrowed from medical examiners. In game development, a game’s features, design, and development are autopsied. In a class, the corpse is the class itself. Teachers often schedule a postmortem for the last session to create a space for conversation about what worked and what didn’t.
  • quest: the levels through which players acquire skills.
  • random factor: ensures that the outcomes of player choices are not always predictable. Some teachers integrate dice to randomize groupings of students into guilds, for example.
  • scaffolded instruction: guides students from smaller, simpler skills, to larger, more complicated ones.
  • stuckpoints: a term coined by Peter Elbow as a way of recasting failures as essential to writing development.
  • walkthrough: a step-by-step guide to obstacles, puzzles, or bosses. Players create walkthroughs for other players.

    gamification: related concepts

  • active learning: “experiencing the world in new ways, forming new affiliations, and preparation for future learning” (Gee 23).
  • critical learning: “learning to think of semiotic domains as design spaces that manipulate us in certain ways and that we can manipulate in certain ways” (Gee 43).
  • semiotic domains: “any set of practices that recruits one or more modalities to communicate distinctive types of meaning” (Gee 18).
  • multimodality: “In multimodal texts (texts that mix words and images), the images often communicate different things from the words. And the combination of the two modes communicates things that neither of the modes does separately. Thus, the idea of different sorts of multimodal literacy seems an important one. Both modes and multimodality go far beyond images and words to include sounds, music, movement, bodily sensations, and smells” (Gee 14).
  • disruption: somewhere between intervention and subversion, a “creative act that shifts the way a particular logic or paradigm is operating” (Flanagan 12).
  • intervention: a specific type of subversion that relies upon direct action and engagement with political or social issues, a “‘stepping in,’ or interfering in any affair, so as to affect its course or issue” (Flanagan 11).
  • subversion: “the turning of a thing upside down or uprooting it from its position; overturning, upsetting; overthrow of a law, rule, system, condition” (OED).

    gamification: a bibliography

  • Alberti, John. “The Game of Reading and Writing: How Video Games Reframe Our Understanding of Literacy.” Computers and Composition 25.3 (2008): 258–269. Print.
  • Bogost, Ian. “Gamification Is Bullshit.” Ian Bogost. Web. 8 Apr. 2014.
  • Bogost, Ian. How to Do Things with Videogames. Minneapolis: University of Minnesota Press, 2011. Print.
  • Colby, Richard and Rebekah Shultz Colby. “A Pedagogy of Play: Integrating Computer Games into the Writing Classroom.” Computers and Composition 25.3 (2008): 300-312. Print.
  • Dignan, Aaron. Game Frame: Using Games as a Strategy for Success. New York: Free Press, 2011. Print.
  • Elbow, Peter. Writing without Teachers. New York: Oxford University Press, 1998. Print.
  • Flanagan, Mary. Critical Play: Radical Game Design. Cambridge: MIT Press, 2009. Print.
  • Hodgson, Justin. “Developing and Extending Gaming Pedagogy: Designing a Course as Game.” Eds. Colby, Richard, and John Alberti. Rhetoric/ Composition/play through Video Games: Reshaping Theory and Practice of Writing. New York: Palgrave Macmillan, 2003. 45-62. Print.
  • Gee, James Paul. What Video Games Have to Teach Us about Learning and Literacy. New York: Palgrave Macmillan, 2007. Print.
  • Juul, Jesper. The Art of Failure: An Essay on the Pain of Playing Video Games. Cambridge: MIT Press, 2013. Print.
  • McGonigal, Jane. Reality Is Broken: Why Games Make Us Better and How They Can Change the World. New York: Penguin Group, 2011. Print.
  • Miller, Benjamin. “Metaphor, Writer’s Block, and The Legend of Zelda: A Link to the Writing Process.” Eds. Colby, Richard, and John Alberti. Rhetoric/ Composition/play through Video Games: Reshaping Theory and Practice of Writing. New York: Palgrave Macmillan, 2003. 99-112. Print.
  • Sheldon, Lee. The Multiplayer Classroom Designing Coursework as a Game. Boston: Course Technology/Cengage Learning, 2012. Print.
  • Kapp, Karl M. The Gamification of Learning and Instruction: Game-Based Methods and Strategies for Training and Education. San Francisco: Pfeiffer, 2012. Print.
  • Wark, McKenzie. A Hacker Manifesto. Cambridge, MA: Harvard University Press, 2004. Print.

Of Lobotomy, Narrative, and Interface

When I registered for a dinner discussion with Miriam Posner at Fordham University’s Lincoln Center campus, I did not expect brains would be on the menu. Posner’s talk, an aperitif to her forthcoming book, Depth Perception: Narrative and the Body in American Medical Filmmaking (under contract with University of North Carolina Press), lingered on early twentieth-century lobotomies, as participants raised pointed questions about the process and documentation of Walter Freeman’s many lobotomies (see The Lobotomy Letters or The Lobotomy Files). Yet, despite the curiosity of these surgeries, lobotomies provided but a frontal lobe to Posner’s expansive presentation. As the title of her talk suggests, “Thinking Through and with Text: Designing Digital Humanities Scholarship for the Screen” was as much about how scholars today re-present scholarship via electronic platforms as it was about how surgeons captured and represented bodies in medical films. With this post, I want to raise two related questions that, in the second half of the conversation, divided participants: What should a user interface do, and what is its relation to a narrative?

Posner discussed a number of existing digital projects that showcase the “affordances and opportunities of digital publishing.” She described the Negro Travelers’ Green Book Map in relation to three main considerations: sources, processing, and presentation. Beginning with the Green Book, a directory of “safe” destinations for African American travelers during the Jim Crow era (source), scholars scanned, geo-located, and built a database of destinations (processing), and mapped and made that data searchable (presentation). While the Green Book Map was a crowd-pleaser, subsequent projects tested the audience’s open-mindedness about interface design. For example, Posner introduced the multimodal journal Vectors. In concept, the audience embraced the proposition (judging by head nodding), but as soon as Posner opened the Vectors editors’ statement, brows furrowed. Here was a page that purported to speak (a statement of intent), but that required the user to pose a question (a keyword search). What did readers have to do to access the entire statement? Whereas some members of the audience rejoiced in the “problem” that the code suggested, others simply wanted to read the statement. A glance at N. Katherine Hayles’ Narrating Bits exacerbated the divide. One participant asked, “Who uses it?” “What if interfaces aren’t for use, but for something else entirely?” Posner rejoined.

We discussed several other projects that underscored the diverse uses of digital interfaces. Whereas The New York Times’ Snow Fall employs an immersive interface that absorbs the reader in a multimedia report on an avalanche, Erik Loyer’s Freedom’s Ring (built in Scalar, Vectors’ new CMS) enables readers either to follow a prescribed narrative or to chart their own paths through its nodes. The defamiliarizing interface of Whitney Trettien’s Plant -> Animal -> Book, meanwhile, requires readers to explore content—and the act of reading—associatively.

If one takes seriously the proposition that user interfaces are more than transparent views of content (Johanna Drucker), Posner’s talk underscores the potential of interfaces to function as windows, walls, mazes, and gateways. I want to think about the relation of interfaces to narratives. Like many students of the humanities, I enjoy a good story. The question is whether writers or scholars ought to, given the availability of flexible electronic platforms, enable readers to construct their own narratives by means of different interfaces.

In our conversation with Miriam Posner, several participants argued that interfaces are inherently coercive because they require somatic engagement with prescribed routes (e.g., Scalar’s linking and forking paths). However, I fail to see how the narrative of a digital text is any more coercive than that of certain print novels. The frustration that participants expressed about their inability to read the Vectors editors’ statement is not unlike the frustration readers value in difficult novels such as Gravity’s Rainbow, Finnegans Wake, and Pale Fire. We prize the challenges of those novels and how they coerce us into becoming conscious of how we read.

The issue, as I see it, is not that digital texts are inherently more coercive than print counterparts, but rather that they provide an illusion of control. This issue is only a problem if the reader is recast as writer. Electronic interfaces are worth evaluating critically because they enable writers to cast the seedlings of multiple simultaneous narratives. Interfaces that allow readers to chart different paths through content (such as those built with Scalar) may not allow readers to inscribe their own narratives, but they enable readers to discover other narrative germinations. Those discoveries, coerced as they may be, belong to readers in much the same way as does understanding wrested from an oblique print narrative. In this context, perhaps interface will entangle with narrative and the act of reading, its form, akin to the human brain, replete with unseen passageways, unexpected barriers, and unforeseeable possibilities.
