Thursday, September 20, 2012

Overview: The Design of Everyday Things


        As the title suggests, this is a post about my impressions of the book The Design of Everyday Things by Donald A. Norman.

Chapter by Chapter Reaction Breakdown:
  1. "The Psychopathology of Everyday Things": Chapter one dealt mainly with bad design and the keys to good design. The bad designs included most door opening mechanisms (I finally know why I struggle to open some doors!), a refrigerator thermostat, and an office telephone. I enjoyed learning the reasons that some designs were bad, but what was more interesting was the way Norman categorized the necessities of good design into visibility - the need for the mental map to match the design map by providing good clues to the operation of the object, mapping - how physical buttons or switches interact with the object's operations, and feedback - how the object tells the user an action has occurred. These three design aspects are very general, but are great things to remember when designing anything or evaluating a design.
  2. “The Psychology of Everyday Actions”: Chapter two dealt mainly with the false blame people put on themselves and the seven stages of human action. The former is a bit eye-opening, because I have definitely had my moments of just not being able to figure out the operation of a 'simple' object, and realizing that most (well, probably only some) of those objects had bad or even terrible designs is reassuring. The latter was interesting in that it put words to the actions, whether conscious or unconscious, that lead me to complete a formulated goal.
  3. “Knowledge in the Head and in the World”: Chapter three dealt mainly with the distributed and imprecise knowledge of humans, how it is stored both in the brain and in the world, and the natural mapping of objects that comes out of the relationship between memory and environmental clues. Learning about the general inner workings of memory is very cool and helps me understand (and even exploit) how the brain works and how knowledge is stored. Natural mapping seems like a good way for the user of an object to instantly piece together its visible design aspects and learn its use based on the relationship between memories of other, similar objects and clues from the world.
  4. “Knowing What to Do”: Chapter four dealt mainly with object constraints and how they relate to everyday objects. Object constraints really put into words what I already knew about design: that good designs take into account physical, cultural, and semantic constraints. It also made me realize that, subconsciously, I knew these things about everyday objects. Applying and evaluating these constraints on objects really hit home, because I have encountered these good designs but never thought about what made them so good.
  5. “To Err is Human”: Chapter five dealt mainly with human error and the memory trade-offs that our brains make in order to store and associate massive amounts of data. I liked that Norman categorized the types of mistakes I make, but I did not like how many of those mistakes I make all the time. It is nice because now, whenever I encounter a problem, I can recognize what I am doing and try to fix the erroneous behavior as quickly as possible, instead of perpetuating the problem and potentially wasting a large amount of time and causing embarrassment.
  6. “The Design Challenge”: Chapter six dealt mainly with the design and pitfalls of the keyboard and the faucet and how they came to be, as well as the fact that most designers are not users (nor are some designers' clients) and that this plays a very important role in the initial failures of many technologies. After reading this chapter, I realized that Norman has so far been very good at predicting future technology (or is today's technology a direct result of his predictions?).
  7. “User-Centered Design”: Chapter seven dealt mainly with the seven principles for transforming difficult tasks into simple ones and how intended usability affects design. This being the final, summation chapter, it was very interesting to see Norman give readers a sort of checklist to follow for good design and how to get there. It was also nice to read his final thoughts on the matter and his gentle urging to support good design.
Overall Reaction:

        Overall, this book was an eye-opening endeavor into not only the design aspects of everyday things, but also the reasons behind those aspects, namely the way the human mind works. This book was an excellent and easy read that presented ideas in a clear, coherent manner, though it also sounded very "80s" (which makes sense given that the book was published in 1988). The book was very sequential in its findings, only going back to previous designs as a reference or baseline when presenting new facts, which makes it easier to follow and proves invaluable to the understanding of some of the concepts.
     
        The most interesting chapter for me was chapter three, because it opened my eyes and made me realize that my brain works a certain way (or can be thought of as working a certain way) that logically makes sense, and I can see it in my everyday life choices and memory recollections. The aggregated data sort of 'theory' (it's not actually called the aggregated data theory) really put into words exactly what I was thinking as I was reading about how the brain stores pictures and events. The fact that outliers become more important (equal in importance to a conglomeration of the mundane and therefore individually more important) is almost, in and of itself, the reason that many humans seek new adventures and experiences and why we remember the outlier memories more readily than the everyday.

        I also enjoyed chapter two because it gave me hope for myself and relief from the self-placed burden of stupidity I feel when trying (and failing miserably) to use certain everyday objects. I would not go so far as to place myself in the 'learned helplessness' category, because if something needs to be done, I'll do it, but there are a few things (mostly ambiguous doors and long rows of unlabeled light switches) I tend to avoid and/or evaluate very carefully before proceeding to interact with them.

        I found chapter six to be interesting because it talked about how the modern-day keyboard came about (which, of all the designs talked about in the book, is the one I use most) and keyboard shortcuts, which I absolutely could not be productive without (okay, that's a bit of a stretch, but I do love shortcuts). It also bad-mouthed Apple's Macintosh a bit, which is refreshing to see because Norman speaks objectively instead of spinning everything in Apple's favor (Apple's extraordinary marketing team of the past ~20+ years clearly did not exist at the time).

        In conclusion, this book was extremely informative and really made me aware of the way my mind works in combination with the subtle design enhancements that seemingly magically allow me to 'work' an object almost immediately. It also serves as a good tool to make readers think about everyday items in the same way as the author and apply his findings to their future designing and buying habits. 9/10, would read again.

5 Good Designs: 
  1. Bike Pedal System: 
    • Visibility – High visibility as the pedals and gears are uncovered and clearly connect in a meaningful way. The pedals also turn in the same direction as the wheels.
    • Natural Mapping – 1-to-1 mapping of the pedals.
    • Feedback – Great feedback. If you're doing it right, the bike will move in the direction pedaled (assuming a simple, one-gear bike, not pictured).
  2. Microwave Controls: 
    • Visibility – Low visibility as to how it works, but the descriptions on the buttons easily clue a user in to what each one does.
    • Natural Mapping – Great mapping as each button matches to a function or number (i.e. only does one action) and the higher functions are mapped directly to individual buttons as well.
    • Feedback – Amazing feedback, both visual, through the small LED display and the light that a running microwave emits during use, and auditory, through the sound of the microwave coming on.
  3. Pipe Valve:
    • Visibility – High visibility from the way the valve handle is positioned. Users can easily infer that in order for something to flow everything must line up.
    • Natural Mapping – Great, 1-to-1 mapping.
    • Feedback – Somewhat limited unless there is a sound in the pipe when a fluid or gas passes through the valve or pipe line.
  4. Classroom Projector Controls:
    • Visibility – While all the buttons are the same square shape, the visibility is still somewhat high due to the descriptions/symbols located directly on the buttons.
    • Natural Mapping – Great 1-to-1 mapping for all the functions.
    • Feedback – Good feedback from physically seeing the projector screen lower, hearing/seeing the projector come on, having the laptop screen show on the projector, etc.
  5. Computer Speaker Controls:
    • Visibility – While the user cannot see how a speaker works, the knob/button descriptions provide great clues as to its operation.
    • Natural Mapping – Great 1-to-1 mapping for all controls.
    • Feedback – Great feedback from changing sounds (assuming the speaker is connected to a device that is playing a sound) and an LED that lights up when the speaker is turned on.
5 Bad Designs:
  1. Trash Can on Texas A&M Campus:
    • Visibility – Low visibility. From the book and from life, we know that horizontal bars are meant to be pushed, but this bar is meant to be pulled. From the rounded bottom of the 'door', we might also assume that it holds something it needs in order to operate (it does not). A basic case of a mismatch between the system image and the user's conceptual model.
    • Natural Mapping – Given that it is a sort of door, the control is mapped 1-to-1.
    • Feedback – Great feedback in that a user can tell whether the door is open or closed by whether or not they can peer into the trash.
  2. USB Flash Drive:
    • Visibility – High visibility, because the user knows that the male end of the USB goes into the female end in order to work. The problem is that there is no clear indication of how the USB should be oriented in order to plug it in (i.e. which side goes up).
    • Natural Mapping – 1-to-1 mapping as there is only one way to plug it in, but you get it wrong EVERY time.
    • Feedback – It either goes in or it does not, so purely physical feedback. Some USB drives have a built-in LED for when they are connected or in use.
  3. Computer Monitor Controls:
    • Visibility – A uniform row of buttons that are either not clearly marked or not marked at all, so visibility is very low (for my personal monitor, not pictured, the power button is the ONLY button labeled).
    • Natural Mapping – Pretty bad. Other than the power button, most buttons are not 1-to-1 mapped and the system is often quite convoluted and non-intuitive, with all the work being done in a menu that is difficult to navigate.
    • Feedback – The monitor's on-screen menu will reflect any changes, or the monitor will shut off or turn on, so it does well in this department.
  4. Digital Camera:
    • Visibility – Very low visibility. There are all kinds of knobs, sliders, and buttons that have very little visible description and will leave users baffled as to what each one does (although it looks like there was a physical constraint that required such small/odd controls).
    • Natural Mapping – Mostly 1-to-1 mapping of controls, so it does well in this department.
    • Feedback – Decent feedback, as digital cameras usually have a small screen on the back, so all or most of the functions will be displayed on screen.
  5. Mattel's Intellivision Game Console Controller:
    • Visibility – Low visibility, because regardless of a game's control scheme, a user has no idea what an intuitive control scheme would be, since it is just a number pad and spin wheel.
    • Natural Mapping – Assuming a game uses a reasonable control scheme, the mapping would be roughly 1-to-1, so optimal.
    • Feedback – Good feedback, because the television set provides on-screen feedback, assuming the button/wheel is mapped to a control.

Thursday, September 13, 2012

Minds, Brains, and Programs (Chinese Room thought experiment)


         Taking a break from the technical papers, I am going to discuss Minds, Brains, and Programs, a paper written in 1980 by John R. Searle at the University of California, Berkeley.

         First off, the thought experiment that Searle proposes concerns an artificially intelligent machine's true understanding of anything from a psychological standpoint. He uses a proposed experiment (the Chinese Room experiment) as the basis for his argument. In this experiment, a person who does not understand any Chinese characters receives an input of Chinese characters into a closed room. Then, based on a set of instructions written in English, the person's native language, the person manipulates the Chinese symbols into an output. The idea is that this person could have a conversation in Chinese with a native Chinese speaker, and the native speaker would not be able to tell that the person did not know Chinese. His point was that machines, at least at that point in time, were only able to manipulate data based on a set of instructions, but could never actually understand what the data or symbols really meant. In this sense his setup passes the Turing test, because a Chinese speaker would not be able to tell which room had the computer and which room had the person, since both the human and the computer were manipulating symbols based on the exact same set of rules (translated into computer language, of course).
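
        To make Searle's setup concrete, here is a minimal sketch of the room in Python. This is my own toy illustration, not anything from the paper: the 'rule book' entries are invented, and a real rule book would have to be unimaginably larger, but the point survives at any scale.

```python
# A toy Chinese Room: the "person" follows a rule book (here, a plain
# dictionary) that maps input symbols to output symbols. Producing
# correct replies requires zero understanding of what they mean.
RULE_BOOK = {
    "你好吗?": "我很好, 谢谢。",    # "How are you?" -> "I am fine, thanks."
    "你是谁?": "我是你的朋友。",    # "Who are you?" -> "I am your friend."
}

def chinese_room(message: str) -> str:
    """Reply by pure symbol lookup, with no comprehension at all."""
    return RULE_BOOK.get(message, "请再说一遍。")  # "Please say that again."

# To a native speaker outside the room these replies look fluent, yet
# neither this function nor a person executing it understands Chinese.
print(chinese_room("你好吗?"))
```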

        Searle continues his rant of a paper by explaining the differences between weak and strong AI, which is a useful distinction: weak AI is merely a tool that just performs a set of instructions, whereas strong AI actually understands the instructions it is performing. His argument is that an appropriately programmed computer cannot literally have cognitive states or intentionality, at least in the current (1980) design of computers. He spends the rest of the paper ignoring logic and answering criticisms in a blatantly biased manner.

        He continues his paper by talking about how he believes that understanding is black and white; that you either understand or you don't. This is not the case, as shown by his critics' argument that a human who understands English can also partially understand French and, to a lesser degree, German, but understands nothing about Chinese. This argument holds true in my book because it applies to me directly, and the two-state definition of understanding doesn't make any sense in my case.

        The first argument against this thought experiment hits the nail on the head. It states that while the person in the room manipulating the Chinese symbols does not understand them, the system as a whole does. His rebuttal is that if the person internalizes the rules for symbol manipulation, then he has encompassed the whole system, yet still does not understand Chinese. My response is that whoever made the set of rules must be included in the system for the system to function, and, therefore, the system as a whole would understand Chinese. The most interesting point against Searle's argument is the "many mansions" reply, which proposes that with sufficient technology, it is possible to make a machine with cognition and intentionality. Searle agreed with this proposition, but undercut it by stating that if the definitions change, then it is impossible to answer the original question. He ended by answering some questions about his belief that AI can never progress to the point where a program could give a machine intentionality, cognition, and understanding.

       Overall, this paper was interesting and I think that, while he was wrong, Searle did advance the philosophical aspects of AI and helped to open up a world of research into human psychology and artificial intelligence.

Monday, September 10, 2012

Paper #4: Recipe Medium with a Sensors-Embedded Pan

        For my third paper review, I chose Panavi: Recipe Medium with a Sensors-Embedded Pan for Domestic Users to Master Professional Culinary Arts, a paper co-authored by Daisuke Uriu, Mizuki Namai, Satoru Tokuhisa, Ryo Kashiwagi, Masahiko Inami, and Naohito Okude. This paper was presented at CHI 2012 in Austin, TX and a full list of this paper's references can be found here under the 'References' tab.

Author Bios:


TL;DR (Summary):

        The team discussed Panavi, a sensors-embedded frying pan that is wirelessly connected to a computer system that shows text messages with sounds, analyzes sensor data and user conditions, and provides the user with instructions. Panavi is designed as a way to teach users to be expert chefs in a domestic environment without much prior experience. It utilizes projected images, LED indicators, and vibration to interact with the user.
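
        To give a feel for what such a system does, here is a hypothetical sketch of the kind of guidance loop a sensor-embedded pan might drive. The sensor stub, target temperature, and messages are all my own assumptions for illustration, not Panavi's actual API, recipes, or values.

```python
import time

# Hypothetical recipe step: hold the pan near a target temperature.
# TARGET_C, TOLERANCE_C, and read_pan_temperature() are invented for
# this sketch; they are not taken from the Panavi paper.
TARGET_C = 120.0
TOLERANCE_C = 5.0

def read_pan_temperature() -> float:
    """Stand-in for a reading from the pan's embedded thermometer."""
    return 117.0  # a fixed value keeps the sketch self-contained

def advise(temp_c: float) -> str:
    """Turn a sensor reading into an instruction for the cook."""
    if temp_c < TARGET_C - TOLERANCE_C:
        return "Turn the heat up"
    if temp_c > TARGET_C + TOLERANCE_C:
        return "Turn the heat down (the real pan vibrates as a warning)"
    return "Hold this heat and move on to the next step"

for _ in range(3):  # the real system would loop for the whole recipe
    print(advise(read_pan_temperature()))
    time.sleep(1)
```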

        The team then discussed the design process they used to make the pan, including the sensors and related system that allow users to cook with little or no prior experience. The system has so far covered how to prepare pancetta and carbonara, but it could be extended to other recipes as well. The user study the team performed included four beginner and intermediate cooks using Panavi in a simulated kitchen environment and concluded that the system was beneficial to each person, but that some of the users had problems understanding instructions or picking out the important details in the recipe.

        The team concluded that the Panavi system was a success and could be applied to other menus, though it needs a few tweaks before it is ready for public use.

Related Works Not Referenced:

  1. Smart kindergarten: sensor-based wireless networks for smart developmental problem-solving environments - talks about embedded wireless sensors in children's toys to enhance learning. Relevant in that it is an embedded sensor that is used to enhance learning, though not related to cooking.
  2. Cooking procedure recognition and inference in sensor embedded kitchen - talks about an algorithm that shows users instructional videos based on sensory input determining which step the user is at in a recipe. Relevant in that it is trying to teach a user to cook, but does not provide anything more than instructional videos.
  3. Development of a wearable motion capture suit and virtual reality biofeedback system for the instruction and analysis of sports rehabilitation exercises - talks about a wearable motion capture suit that instructs users in sports rehabilitation exercises. Relevant in the fact that it uses a sensor-embedded suit to help teach users to perform a task, but does not go beyond that.
        There was not much other related work relevant to this paper, but the works described above all use a 'smart' object to instruct a user in a task. This paper chose to make a 'smart' frying pan, which the team did successfully and more thoroughly than the other cooking-instruction paper.

Evaluation:

        The team used a qualitative method of evaluation by getting to know the users involved and gathering user input at the end of the experiment. They also used a somewhat objective method of evaluating the final food product from each user by comparing it to a 'perfect' example of the dish. The test was systemic because it tested the use of the entire system to determine if it worked.

Discussion:

        This technology is very interesting because it would allow someone with minimal cooking experience to cook on a reasonable level. It would probably need to have a 'beginner' mode (along with other modes) that would go into more or less detail and highlight the more important aspects of each recipe before it could be taken to market. This is a novel idea in that it combines sensor-embedded objects to facilitate cooking instruction, rather than just video or written instruction.

Thursday, September 6, 2012

Paper #3: Implanted User Interfaces

        For my second paper review, I chose Implanted User Interfaces, a paper co-authored by Christian Holz, Tovi Grossman, George Fitzmaurice, and Anne Agur. This paper was presented at CHI 2012 in Austin, TX and a full list of this paper's references can be found here under the 'References' tab.

Author Bios:
TL;DR (Summary):

        The team researched small interface devices that are implanted underneath the user's skin. They discuss the implications of making a user interface available to the user at all times and the four core challenges that come with it: how to sense input through the skin, how to produce output, how to communicate amongst one another and with external infrastructure, and how to keep the device powered. They studied a device surgically implanted into a human arm and found that the interfaces worked through the skin. They then discussed the method they proposed to implant their device, the medical implications of implanting such a device, and how they tested it with artificial skin made of silicone in order to avoid medical complications in test subjects.

        The paper concludes that implantable devices are technologically viable and are the way of the future.

Related Works Not Referenced:
  1. US Patent No. 5,724,985 - Talks about a device used to communicate with implanted medical devices. Relevant to this paper's technology, but used for medical purposes only.
  2. US Patent No. 6,358,202 B1 - Talks about a network interface device implanted in a person in order to control artificial organs and dispense medications. Relevant to this paper's technology as it could very easily be implemented to extend the usefulness of this device.
  3. US Patent No. 2010/0185182 A1 - Talks about an implantable device that monitors and dispenses spinal fluids. Not very relevant to this paper's technology as it only deals with a medical problem.
  4. User Interface for Segmented Neurostimulation Leads - Talks about a device used to interface with a neurostimulator that is implanted in a person's head. Somewhat relevant, as this paper's technology could be re-tooled to hook up to a person's brain and used to control a device, but it has no effect on its current state.
  5. US Patent No. 2010/0268296 A1 - Talks about a programmable device to interface with and control a heart implant. Not very relevant to this paper's technology, because the interface works in the opposite direction (an external device controlling the implant).
  6. US Patent No. 7,486,184 B2 - Talks about a coaxial antenna being used to facilitate an interface between an implanted medical device and a computer. Relevant only in design; this paper's device communicates using Bluetooth rather than a coaxial antenna.
  7. US Patent No. 2011/0172564 A1 - Talks about a user interface device that is implanted and transmits the user's posture and physiological state in real time. Not particularly relevant to this paper's technology, but could be implemented in future devices.
  8. User Interface System for use with Multipolar Pacing Leads - Talks about a user interface for an electrode array implanted in a person that can be used to control an ambulatory medical device. Not particularly relevant to this paper's technology, but could be implemented in future devices to help users control their mobile device.
  9. US Patent No. 2010/010584 A1 - Talks about an implanted device that records and transmits the user's posture. Not particularly relevant to this paper's technology, but could be implemented in future devices.
  10. US Patent No. 7,668,599 B2 - Talks about an implanted eye prosthetic that can be controlled with an interface. Not particularly relevant, but this paper's technology could be implemented in an eye or other prosthetic in the future.
Evaluation:

        The team evaluated their device objectively and quantitatively by testing it on a single subject to research the viability of this particular implantable device and possibly future such devices. They tested the system part by part, evaluating the inputs and sensors both under the skin and outside the skin and comparing the two to determine the limitations of, and possible compensations for, each input/output device, and thereby the viability of each piece inside the body. With only a single test subject, this study is more of a proof of concept for this device and is meant to be a basis for further research. There are many implantable medical devices that are similar to this, but this idea is novel because it is intended for non-medical use, particularly with mobile devices.

Discussion:

        Implantable devices, in my opinion, are the way of the future. This paper was a very interesting read and I hope that these findings will further or spark future research into the field. As long as the implanted devices are small enough, comfortable enough, and cheap enough, I think people will be more than willing to have a device implanted in their arm.

Tuesday, September 4, 2012

Paper #2: Touché

        For this post, I have put together a summary of Touché: Enhancing Touch Interaction on Humans, Screens, Liquids, and Everyday Objects, a paper co-authored by Munehiko Sato, Ivan Poupyrev, and Chris Harrison. This paper was presented at CHI 2012 in Austin, TX and a full list of this paper's references can be viewed here under the 'References' tab.

Author Bios:
TL;DR (Summary):

        This team has created a novel touch sensing technology, called Touché, that uses a capacitive touch sensor embedded into ordinary objects in order to read touches and gestures. They began the paper by talking about their new technology, called Swept Frequency Capacitive Sensing (SFCS), which uses a device to "monitor the response to capacitive human touch over a range of frequencies", instead of only monitoring a single frequency. The paper then goes on to describe the science behind the capacitive touch sensor and how it uses capacitive profiles based on the human body in order to detect touch input across a range of devices.
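
        As a rough sketch of how I understand SFCS, the code below sweeps a range of frequencies, records the response at each one to form a "capacitive profile", and matches that profile against stored gesture profiles. The frequency range, the stub measurement, and the nearest-neighbor matching are all my own assumptions for illustration; the paper's actual signal processing and classifier are more sophisticated.

```python
import math

# Assumed sweep of 1 kHz to 200 kHz in 1 kHz steps (illustrative only).
FREQUENCIES_HZ = [1_000 * i for i in range(1, 201)]

# Invented capacitive profiles for two gestures; a real system would
# record these during a calibration/training phase.
GESTURE_PROFILES = {
    "one-finger touch": [math.sin(f / 40_000) for f in FREQUENCIES_HZ],
    "two-finger pinch": [math.sin(f / 25_000) for f in FREQUENCIES_HZ],
}

def measure(frequency_hz: float) -> float:
    """Stand-in for the sensed response at one excitation frequency."""
    return math.sin(frequency_hz / 40_000)  # mimics a one-finger touch

def classify(profile: list) -> str:
    """Nearest-neighbor match of a swept profile (one simple option)."""
    def distance(stored):
        return sum((a - b) ** 2 for a, b in zip(profile, stored))
    return min(GESTURE_PROFILES, key=lambda g: distance(GESTURE_PROFILES[g]))

swept_profile = [measure(f) for f in FREQUENCIES_HZ]
print(classify(swept_profile))  # -> "one-finger touch"
```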


Next, the paper describes the different devices that the team tested the gesture-recognizing technology on. These devices include door knobs, a table, a touch device, the human body, and a shallow pool of water. They went on to describe their testing methods, which included two groups of 12 people who tested each gesture 30 times on every device to calculate the accuracy of the system.


After finding the accuracy of each device, the team removed gestures for some devices in order to reach an accuracy of at least 95% for each gesture on each device and concluded that the technology, with certain tweaks to find the optimal capacitance range, could be ready for use in practical applications.
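
        The pruning step itself is simple arithmetic. Here is a sketch with made-up per-gesture trial counts (the real study used two groups of 12 people with 30 trials per gesture per device):

```python
# Hypothetical counts of correctly recognized trials for one device.
TRIALS = 30
results = {"grasp": 29, "pinch": 30, "five-finger touch": 26}

THRESHOLD = 0.95  # the paper's 95% accuracy bar
kept = {
    gesture: correct / TRIALS
    for gesture, correct in results.items()
    if correct / TRIALS >= THRESHOLD
}
print(kept)  # grasp (~96.7%) and pinch (100%) stay; five-finger touch is dropped
```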

Related Works Not Referenced:
  1. SideSight: multi-"touch" interaction around small devices - Uses multiple sensors to gain gesture abilities for small-screened touch devices. This paper is novel in the fact that it only uses one sensor and can be placed on a multitude of different objects.
  2. Principles and Applications of Multi-touch Interaction (2007) - Explains the benefits of multi-touch and orientation-sensing gestures on touch surfaces. Does not deal with multiple surfaces or objects like this paper does.
  3. Multi-touch Interaction - Talks about the uses of humans' natural hand dexterity in the realm of touch inputs. Does not extend to multiple surfaces or objects like this paper does.
  4. Multi-touch Interaction Wall - Talks about the need for multi-touch interfaces for multiple users. This paper talks about multi-touch interfaces designed with single users in mind.
  5. Shallow-depth 3d interaction: design and evaluation of one-, two- and three-touch techniques - Talks about multi-touch capabilities used to interact with a shallow-3D display. Does not extend beyond 2D touch screen input.
  6. Slide rule: making mobile touch screens accessible to blind people using multi-touch interaction techniques - Talks about a technique that would allow blind people greater access to multi-touch enabled devices. Does not apply to multiple objects like this paper does.
  7. Empirical evaluation for finger input properties in multi-touch interaction - Talks about a technique applied to 2D touch devices that takes into account the area of the screen that is being touched, rather than a single x,y point. This technique could be applied to the sensor described in this paper to increase the amount of gestures available or possibly the accuracy of certain gestures already proposed.
  8. Multi-touch interaction for robot control - Talks about using multi-touch gestures to control robots. This is beyond the scope of this paper, but the sensor defined in this paper could be applied to this research.
  9. Low-cost multi-touch sensing through frustrated total internal reflection - Talks about a low-cost multi-touch option using a new touch-sensing technique that allows for cheaper sensors than traditional multi-touch devices. Interesting read, but the research does not apply to this paper as it is still only used on a 2D surface.
  10. TouchLight: an imaging touch screen and display for gesture-based interaction - Talks about a way to enable multi-touch gestures on a projector-based screen. Interesting technique, but still only allows for 2D displays and not other objects.
        Overall, the related works in this field are only related to multi-touch gestures on a 2-dimensional surface and do not extend into the realm of applying multi-touch capabilities to other objects.

Evaluation:

         This team used objective, quantitative measures to systemically evaluate their touch sensor by testing the sensor on a variety of objects, including door knobs, tables, the user, touch screens, and water, to show that the system as a whole works with gesture recognition accuracies above 95%. Using two groups of 12 people, the team tested multiple gestures multiple times on each object to ensure the accuracy of their gesture-recognition technology over a wide range of capacitances.

Discussion:

        This paper was very interesting because the use of touch gestures is a growing field in CHI, and this novel approach will bring technology one step closer to being able to integrate seamlessly into everyday life. The use of cheap and minimal hardware is what makes Touché a viable option for use in any home or office, and this research will also further my goal of having a 'smart' house.