Atlanta "Buzztanical" Gardens

HCI 6750 Project: Design process for a successful Human-Computer Interface

Mamie Aldridge, Ron Barbas, Amon Millner, Yoichiro Serita, Maryann Westfall



DESIGN IDEA: Personal information device for Atlanta Botanical Garden visitors.

 

Phase 4: Evaluation

  • Project Description
  • Project Summary
  • Task Analysis
  • Design Summary
  • Physical Description
  • Computational Description
  • Evaluations
  • Cognitive Walkthrough
  • Heuristic Evaluation
  • Think Aloud
  • Overall Evaluation and Recommendations
  • Critiques
  • Future Directions


 

PROJECT DESCRIPTION

For garden enthusiasts, the Atlanta Botanical Garden (ABG) offers a wealth of flora accessible through direct observation in natural habitats. However, this naturalistic resource does not offer easily accessible, in-depth information about the various plants being observed. In addition, no recording options are provided to visitors: individuals and groups must rely on memory, paper and pencil, or photography to take information with them.

Visitors would benefit tremendously from a system that puts more detailed information immediately at their disposal and helps them navigate the expansive gardens, all without disrupting the naturalistic setting or the group experience.

Currently, a visitor who wishes to obtain more information about a plant questions nearby ABG gardeners or goes to the on-site library. Many plants whose markers have been broken, stolen, or weathered are difficult to identify, and gardeners may or may not be readily available to answer the detailed questions of every visitor.

The proposed Plant Tracker® system aims to provide the garden enthusiast with customizable levels of detailed information about plants of interest, even ones with missing labels. The system is being designed for ABG members and draws from many technologies that have been assessed by potential users. With feedback gathered from prototyped pager-sized textual/audio systems, PDA-like information devices, and tablet computer ideas utilizing GPS technology, we have arrived at a composite prototype.

An evaluation plan was carried out on our most mature prototype to rate the usability of our system. Usability bugs were then prioritized with respect to making a successful and usable product.

Project Summary

Task Analysis

During our task analysis, the model which most closely represented our user’s world was the knowledge-based model. Much of the user’s task is achieved through obtaining detailed information found in well-established databases and taxonomic structures.

Analysis of user methodology yielded these primary methods of task fulfillment:

• search for detailed plant information by common and/or botanical name (of plants in view, or of which they know)

• search for detailed plant information by garden location

• search for detailed plant information by plant attribute

Design Summary

The proposed Plant Tracker® system aims to provide the garden enthusiast with customizable levels of detailed information about plants of interest, even ones with missing markers. The system is tailored to assist the user in finding information in the manner most convenient to him/her. While the targeted user group is ABG members, infrequent repeat visits directed us to assure ease of use and learnability. Users can also print or e-mail their findings to enhance their information-gathering visit. In addressing the need for maintaining a naturalistic setting and group experience, our proposed prototype draws from several technologies: a PDA tablet form factor with a large, viewable screen, textual and audio modalities that allow conversation, and GPS technology that eliminates the need for additional plant signage.

Physical Description

The 6.5" x 9" x .8", 1.2 lbs. Plant Tracker features an adjustable shoulder strap, an earbud, stylus-based capture, and 5.5" x 8" active matrix LCD screen. The frame of the device is 1/2" wide to accommodate holding without obstructing or inadvertently activating the stylus-based capture screen.

The Plant Tracker contains semi-powerful computing and information display in a unit that can be handled by users of varying ages without being cumbersome. The adjustable shoulder strap is a feature that affords users hands-free carrying, allowing them to perform other tasks, such as photographing plants or tending to small children. The strap also alleviates the weight of the device when not in use, and prevents the device from being dropped.

Users can operate the device with a stylus that rests in a niche in the molding when not in use and is secured to the device by a 13" cord. The length of the cord is designed to reach all points of the screen without becoming entangled with the shoulder strap or other user accessories. The Plant Tracker also has a single earbud to provide optional audio plant information to the user. Earbud volume is controlled through the user interface, variable from mute to loud. The stylus and the earbud connect to the device at the top center to enable easy access for left- and right-handed users.

The device rests in a power cradle while not in use, eliminating the need for a power button since it is on standby while charging and on when removed from the cradle. Batteries will maintain their charge for a minimum of three hours. There will be a power LED that flashes when the battery is low.

Computational Description

The active matrix display will present menus to reach plant information visually (by ABG map) as well as textually. There are, generally speaking, three "modes" of interaction with the system:

1) Map search mode — The map is initially set to a bird's-eye view of the entire garden. Information about a plant can be reached by locating it on the map and stylus-tapping it, causing the map to zoom in one level at a time, centered on the tap. This feature addresses inconsistencies in the environment, in which plant markers may be missing or damaged. Zoom capability reaches a level of granularity at which plants are displayed at least 40 pixels apart, for ease and accuracy of selection. The HOME button can be used at any time to return to the bird's-eye view of the map and to indicate the user's position in the garden. In addition, users can navigate to a different site in the garden to view a specific plant by searching for it textually and then locating it on the map. If a user zooms into a map section that they are not near, the map displays an arrow pointing to the user's location relative to the current display.
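To make the map interaction concrete, here is a minimal sketch of the tap-to-zoom step. The type names, the 2x zoom factor per level, and the spacing check are our illustrative assumptions, not the prototype's actual implementation:

```typescript
// Illustrative model of the tap-to-zoom interaction described above.
// All names (MapView, Plant, MIN_PLANT_SPACING_PX) and the 2x zoom
// factor are assumptions for this sketch.

interface Plant { id: string; x: number; y: number } // garden coordinates
interface MapView { centerX: number; centerY: number; scale: number }

const MIN_PLANT_SPACING_PX = 40; // plants are selectable at 40 px apart

// Zoom in one level, re-centering the view on the tapped point.
function zoomInAt(view: MapView, tapX: number, tapY: number): MapView {
  return { centerX: tapX, centerY: tapY, scale: view.scale * 2 };
}

// The HOME button returns to the bird's-eye view of the whole garden.
function goHome(): MapView {
  return { centerX: 0, centerY: 0, scale: 1 };
}

// Two plants become individually selectable once their on-screen
// distance reaches the 40-pixel threshold described above.
function selectable(a: Plant, b: Plant, view: MapView): boolean {
  const dx = (a.x - b.x) * view.scale;
  const dy = (a.y - b.y) * view.scale;
  return Math.hypot(dx, dy) >= MIN_PLANT_SPACING_PX;
}
```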

2) Text search mode — To retrieve information textually, a search function is available. The SEARCH button loads the search screen, which offers three ways to search for plant information. Form fields associated with each search method are displayed when that method is invoked. The three search methods and their respective form fields are:

Common/botanical name search — displaying an adapted version of a QWERTY keyboard with its associated input field, and a scrollable list showing fast-find results (partial words are completed, and plants containing those letters are displayed; see the sketch following this mode description)

Garden area search — displaying a scrollable list of garden areas

Plant attribute search — displaying plant attribute drop-down menus

In addition, for all text search methods, buttons are available for text display and/or map locale, along with a SAVE button that keeps plant information in an optional "wheelbarrow" to take home at checkout time.
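A minimal sketch of the fast-find behavior referenced in the name search method above follows; the record type and sample data are illustrative assumptions:

```typescript
// Illustrative fast-find: filter the plant list as each letter is
// tapped on the on-screen keyboard. Names and data are assumptions.

interface PlantRecord { commonName: string; botanicalName: string }

const plantList: PlantRecord[] = [
  { commonName: "Purpletop vervain", botanicalName: "Verbena bonariensis" },
  { commonName: "Garden verbena", botanicalName: "Verbena x hybrida" },
];

// Return every plant whose common or botanical name contains the
// letters typed so far, case-insensitively.
function fastFind(typed: string, records: PlantRecord[]): PlantRecord[] {
  const query = typed.toLowerCase();
  return records.filter(
    (p) =>
      p.commonName.toLowerCase().includes(query) ||
      p.botanicalName.toLowerCase().includes(query)
  );
}

// Tapping "v", "e", "r" narrows the scrollable list to the two
// Verbena entries above.
console.log(fastFind("ver", plantList));
```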

3) Plant information display mode — a page shows details of the plant found and selected. Drill-down menus deliver the amount of detail the user wants. A "whole plant" image and a detail image are shown, with a seasonal selector that changes the two views to show the appropriate seasonal images. A volume control allows the user to listen to an audio reading of the details as they tap on an item.

All modes — buttons persistently visible in all modes are: a HOME button, allowing recovery from any wrong turn at any time; a SEARCH button, which becomes disabled after the search screen has been invoked; a SAVE button, which is made available on the Plant Information page; and a Plant Tracker Guide button to recall the device tutorial.
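These button-availability rules can be summarized in a few lines; the mode names and the ButtonState shape below are our assumptions for illustration:

```typescript
// Sketch of the persistent-button availability rules described above.
// Mode names and the ButtonState shape are illustrative assumptions.

type Mode = "map" | "search" | "plantInfo";

interface ButtonState {
  home: boolean;   // recover from wrong turns at any time
  search: boolean; // disabled once the search screen is showing
  save: boolean;   // available only on the Plant Information page
  guide: boolean;  // Plant Tracker Guide tutorial, always available
}

function buttonsFor(mode: Mode): ButtonState {
  return {
    home: true,
    search: mode !== "search",
    save: mode === "plantInfo",
    guide: true,
  };
}
```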

Evaluations

The three methods of evaluation that we used were: cognitive walkthrough, heuristic evaluation, and think aloud. The cognitive walkthrough evaluated the learnability of our system by highlighting predictability problems, which have a large effect on learnability. The heuristic evaluation assessed customizability and task conformance by looking at key heuristics that Nielsen suggests. Examining the match between system and the real world, user control and freedom, consistency and standards, and error prevention uncovered whether experts and novices alike in the horticulture world could use the device effectively for their desired tasks. The think aloud evaluation examined the effects of potential users' prior knowledge in relation to our interface, determining whether our design criteria rendered a system that lets users achieve desired tasks with their mental models of how the system should work.

Cognitive Walkthrough Methodology

Three HCI experts assisted us with our Cognitive Walkthrough. They were given a sequence of actions to perform (as delineated below and in the appendix) associated with finding detailed information about the "Verbena bonariensis" plant. This is a path that users will likely take when they come across a signed plant they would like to know more about. Most of the permanent exhibits in the garden have signs, and since those are the plants our system offers information about, we supported this path in our cognitive walkthrough evaluation.

With our user group ranging in technological fluency from novice to expert, we felt that the action sequence should be explicit enough for the novice user. Use of the stylus was reinforced so that users would not be confused with touchscreen technology and so that they would be trained to follow an explicit set of instructions.

We also chose this path because we felt that any slips and errors would most likely occur in the text search mode, giving us better feedback for future designs.

Action Steps:

1. Select the SEARCH button by tapping on it with the stylus.

2. Select the botanical name search method by tapping on it with the stylus.

3. Type in the botanical name "verbena bonariensis" by tapping the correct letters on the keypad with the stylus.

4. Scroll to and/or select the correct plant listed by tapping on it with the stylus.

5. Select the text display radio button by tapping on it with the stylus.

6. Tap on the FIND IT button with the stylus.

The HCI experts were then asked to answer the following four questions for each step, creating a believability story:

1. Will the user be trying to produce whatever effect the action has?

2. Will the user be able to notice that the correct action is available?

3. Once the user finds the correct action at the interface, will she know it’s the right one?

4. After the action is taken, will the user understand the feedback given?

The cognitive walkthrough evaluators were each given a form that described the device, listed the user characteristics, defined the task and the action steps, and provided space for the evaluators to answer the believability questions. The date and time of the evaluation were also recorded on the form, and the web-based prototype was loaded for the evaluator to use, with the mouse pointer representing a stylus. The results are summarized below. The full response forms can be found in the appendix.

Cognitive Walkthrough Results

Listed below are the assessments of our system by the Cognitive Walkthrough evaluators. Included are selections from the evaluators' observations, slips, and suggestions (complete evaluation forms are included in the appendix). In the Analysis of Expert Feedback section we provide our assessment of the expert responses and our evaluation process.

Response to Action 1 — Select the SEARCH button by tapping on it with the stylus:

There was a comment about the use of the word "search." Some believed that it invoked a desire/expectation for a text box. Our interface has an intermediate step, which presents the user with multiple search options. Therefore, the system feedback was met with mixed results.

All evaluators agreed that a user could fumble their way through this action step. The results of the Cognitive Walkthrough did not show any other need for major improvement with this step.

Response to Action 2 — Select the "botanical name" search method by tapping on it with the stylus:

Evaluators brought up an excellent point: tapping the text associated with a radio button should also select that button. Users currently have to tap within the radio button itself, so accuracy becomes an issue.

Response to Action 3 — Type in the botanical name "verbena bonariensis" by tapping the correct letters on the keypad with the stylus:

The evaluators mentioned that the user would probably know what to do, but could benefit from a redundant cue or instruction (probably located directly above the input field) explicitly telling them to type by tapping on the keyboard.

Response to Action 4 — Scroll to and/or select the correct plant listed by tapping on it with the stylus:

There was some ambiguity regarding whether this selection would take the user to the plant information page, but the feedback received when tapping it was sufficient to show that the action only highlights the selection.

Response to Action 5 — Select the text display radio button by tapping on it with the stylus:

The term "text display" and the resultant page with text and photos were found to be confusing to some. An evaluator mentioned that users have a page of text associated with the word text. Thus, seeing our plant information page as a result with pictures could surprise users.

Response to Action 6 — Tap on the FIND IT button with the stylus:

This action was self-explanatory and considered very learnable with no other discussion.

Heuristic Evaluation Methodology and Results

Listed below are the assessments of our system by the Heuristic evaluators. Included are selections from the evaluators' observations, slips, and suggestions (complete evaluation forms are included in the appendix). In the Analysis of Expert Feedback section we provide our assessment of the expert responses and our evaluation process.

Five HCI experts were asked to evaluate our entire interface in terms of selected heuristics (adapted from Nielsen's list) for the learnability, task conformance, and customizability of our system. We selected five heuristics that our group felt would yield the most useful feedback and could be evaluated in a reasonable amount of time by our busy experts. The following heuristics were chosen and received the following feedback:

1. Match between system and the real world

Evaluation methodology: An environment like the ABG attracts people who speak the language of horticulture. Our system should present the information they would expect to find about a plant, accessible through a logical sequence of actions. A major part of the interface is an actual map of the garden; it must be tested to see whether the user can use that representation of the real-world ABG to get to the information they desire.

Evaluation result: One evaluator brought to our attention that there was no indication of which way the user is facing on the GPS-enabled map; in the real world, the user would need that information to navigate efficiently. It was also noted that in the real world a SEARCH button is often followed by a screen that accepts search text immediately, whereas our interface presents more search options after the button is tapped. The evaluators felt that these differing user expectations might affect learnability.

It was also reported that users would expect to see the boundaries of the garden area in which they are located on the HOME map. (See Future Directions — 1.)

2. User Control and Freedom

Evaluation methodology: The Plant Tracker should allow a user who makes a mistake an easy way out. We provide a HOME button on pages other than the home screen (bird’s eye view of the map) to give "lost" users a way home. We suspect some users will accidentally zoom on the map and wish to return to the zoomed out view. It is necessary to evaluate whether or not the user knows how to return to where they’ve been. Our usability depends on whether the user is comfortable moving around in the interface. This comfort cannot be achieved if they feel they don’t have the freedom to explore areas with the knowledge that they can always easily return to a familiar place. This is an important aspect for our design, as a person may want to abort an action and search for another plant in another mode. (See Future Directions — 2.)

Evaluation result: The majority of the evaluators felt adequate amounts of control and freedom in our interface. One evaluator noted that the map navigation mode should update automatically, showing plants in the user's vicinity as they walk through the garden; this would give users more freedom to move around without tapping on the map until they reach a nearby plant.

3. Consistency and standards

Evaluation methodology: We have made efforts in our system to reduce the amount of guessing a user has to do while using Plant Tracker. The scientific name of a plant is unique to that plant, so users have a way to consistently receive correct results when they request data. When scientific names aren't available, we attempt to make it clear to the user where they are on the map using GPS and a "you are here" indicator, enabling the user to zoom down to locate the plant they are standing in front of. We will evaluate whether there are any ambiguous signals, or situations we may not have uncovered, in which the user feels confused or perceives an inconsistent interface.

Evaluation result: Multiple evaluators felt that the interface was consistent in general, but had a breach in the following area. The SAVE button appears only on the PLANT INFORMATION page. The evaluators believed that this might lend uncertainty to users — they might wonder how long that option had been available, resulting in concerns about users focusing on the interface instead of their tasks.

4. Error prevention

Evaluation methodology: Measures have been taken to reduce potential errors by accruing user assessments, and we have iterated our design multiple times to prevent errors from occurring. Nonetheless, errors have a way of surfacing, so we sought experts who could tell us whether the system feedback adequately informs users when errors have occurred.

Evaluation result: A BACK button was suggested as well as a "bread crumbs system" to allow users to recover incrementally in the event of an error.

5. Help and documentation

Evaluation methodology: A system that stands to be used infrequently by any given user needs to have help functions available.

Evaluation result: The maturity of the prototype at the time of evaluation did not feature detailed help. This portion of the evaluation could not be accurately assessed at the time of the evaluations.

Think Aloud Methodology

Listed below are the assessments of our system by the Think Aloud evaluators. Included are selections from the evaluators' observations, slips, and suggestions (complete evaluation forms are included in the appendix). In the Analysis of Expert Feedback section we provide our assessment of the expert responses and our evaluation process.

Our group felt that it was important to perform a Think Aloud evaluation to observe the mental models that potential users bring to the table when encountering the Plant Tracker. As designers and experts in our field, we find it difficult to evaluate our interface without extreme bias, so it is very useful to get feedback from potential users who do not have such biases. With their input throughout the design process, we can prevent design inertia.

Potential users were asked to use the think-aloud method of analysis. After a quick briefing, each user was expected to perform a task with the system while verbalizing their thoughts and what their gut feeling (mental model) was telling them to do. Evaluators were given the following task and were allowed to complete it with whatever action steps they deemed appropriate:

You are standing in front of a plant that interests you. You are considering planting this flower in your yard, but you would like to know if the flowers are available in other colors. Your task is to query the system to find out what other colors are available.

We used this method to observe:

• Observation 1 - what slips were made and how persistent and frequent they were

• Observation 2 - what optional action steps were taken, and how persistent they were

• Observation 3 - how often the evaluator asked for clarification

• Observation 4 - where their eyes traveled

 

In order for the users to evaluate our system without disturbing the office workers, we brought a laptop computer with us. After the evaluators read the scenario, we gave them the mouse and told them to "query" our web-based system. While the evaluators were verbalizing their thoughts, we were observing them and taking notes.

We had three evaluators participate in our evaluation. The evaluators are referred to as Evaluator 1, Evaluator 2, and Evaluator 3.

Evaluator 1 — attempted completing the task using botanical name search method

"Search verbena, and search [button] is here..."

"Common name or botanical name..."

"Try botanical name..."

Evaluator clicked "V" key and selected "Ver" from the list shown.

"...and..."

For some seconds, evaluator remained still.

"What should I do next?"

Evaluator was instructed to tap FIND IT button.

"OK, push it, and here’s the verbena!"

"Another color...is lavender."

Evaluator 2 — attempted completing the task using common name search method first, then botanical name search method

"I’m gonna start from the SEARCH button..."

"Select search method, OK, common name..."

"...and I wanna type the name, but how can I put it?"

Evaluator was told that "verbena bonariensis" is a botanical name.

"OK, change to the botanical name search..."

"Then, push the ‘V’ button..."

"Then, select ‘Ver’... "

"Plant info page, OK..."

"And FIND IT."

"I’ve got it."

"Find the color, right?"

"Color, color, color..."

Evaluator is looking around the whole page with attention.

"Here."

"Oh, OK, dark green and lavender."

Evaluator 3 — attempted completing the task using the map method, then botanical name search method

"OK, so, we’re here, right?" pointing to the red flashing point on the map.

"Click here, then we’ve got another zoomed map..."

"Here we are, and zooming again..."

She tried several clicks on the map. Since our web-based prototype has only one clickable area on the map for this type of search, the evaluator could not find the clickable area at first.

"And once more, oh, we could reach the information!"

"It’s verbena, and the color, another color is... lavender."

Think Aloud Results

The answers to our observation questions are summarized below (Observation 1: slips and their persistence/frequency; Observation 2: optional action steps taken; Observation 3: requests for clarification; Observation 4: eye travel):

Evaluator 1

• Observation 1: None

• Observation 2: N/A

• Observation 3: 1 time. Evaluator 1 did not know what to do once the plant name was selected (did not know to press FIND IT).

• Observation 4: Yes. Evaluator 1's eyes were constantly looking over the page.

Evaluator 2

• Observation 1: Tried to search for the plant through the common name method, then had to go back and search under the botanical name.

• Observation 2: N/A

• Observation 3: 1 time. Evaluator 2 did not know whether Verbena was a common name or a botanical name.

• Observation 4: Yes. Evaluator 2's eyes were constantly looking over the page.

Evaluator 3

• Observation 1: Was searching for the hover icon and finally found it; continued to zoom.

• Observation 2: N/A

• Observation 3: None

• Observation 4: Yes. Evaluator 3's eyes were constantly looking over the page.

 

First, if a double-tap-to-zoom-out function were implemented, it would not work well. An evaluator would tap an area they thought corresponded to the plant's location and, after realizing they were zooming in on the wrong area, would struggle to return to the previous view through multiple undo steps. The evaluators also had different notions of double tapping: their double taps were slow and would register as two single taps, zooming in twice instead of zooming out once.
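The slow double taps we observed suggest that any double-tap gesture would need careful timing discrimination. A hedged sketch follows; the 500 ms threshold is an assumed value that would itself need user testing:

```typescript
// Hypothetical tap classifier: commit to a single tap (zoom in) only
// after the double-tap window has passed; otherwise a slow double tap
// registers as two zoom-ins, exactly the behavior our evaluators hit.
// The threshold value is an assumption.

const DOUBLE_TAP_THRESHOLD_MS = 500;

let pendingSingleTap: ReturnType<typeof setTimeout> | null = null;

function onTap(zoomIn: () => void, zoomOut: () => void): void {
  if (pendingSingleTap !== null) {
    // Second tap arrived within the window: treat it as a double tap.
    clearTimeout(pendingSingleTap);
    pendingSingleTap = null;
    zoomOut();
  } else {
    // Defer the single-tap zoom until the window closes.
    pendingSingleTap = setTimeout(() => {
      pendingSingleTap = null;
      zoomIn();
    }, DOUBLE_TAP_THRESHOLD_MS);
  }
}
```

The tradeoff is a perceptible delay before every single-tap zoom, which is one more reason we doubt a double-tap-to-zoom-out gesture would serve this user group well.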

Second, the FIND IT button is a problem. After one of the evaluators selected the plant name, nothing happened; the evaluator was waiting for the plant information to simply appear and had to be told to tap FIND IT. This could be a problem for our users, especially those who are not familiar with computers. The FIND IT button might need to be implemented as a forcing function. (See Future Directions — 3.)

Third, in the search method, users are not sure whether a plant name is botanical or common. There is a gap in Norman's 7-stage model between the goal and the intention: the user wants to search for the plant by name, but not knowing whether that name is common or botanical could be a problem. Users could end up querying the wrong database, because the system makes them choose which method to search by. (See Future Directions — 4.)

Fourth, there is a problem with the layout of our plant information page. There is a lot of information on the page, and the text is small; the evaluators spent a lot of time searching for the plant color. (See Future Directions — 5.)

Fifth, the non-implementation of the shovels, our garden metaphor for digging deeper to get more information, caused problems for our evaluators: they tried to click on the shovels and nothing happened. The fact that they tried, however, suggests that the shovels would work well if implemented; the shovel representation was a good design.

After the evaluators finished the given task of querying our web-based prototype, we rewarded them with chocolate candy. The evaluators and the security guard enjoyed them.

 

Overall Evaluation and Recommendations

Prototype Maturity Issues

The prototype did not feature fully functional HELP or typing functions, making it somewhat difficult for users to understand the system feedback during the Think Aloud. To support the Think Aloud task, the prototype needed to show a list of flower colors instead of leaf colors; it also needs to show a list of plants when a garden area is selected. The prototype was not in the actual form factor and did not have functioning GPS or audio capabilities. Therefore, some of our design criteria could only be evaluated in terms of their inclusion and theoretical benefits.

Criteria Issues

Learnability — the learnability of our system was reported to be the area requiring the most consideration in future prototypes. Consistency was reported to be a severe problem by multiple evaluators during our evaluation.

Task conformance — evaluators reported that our system needs to grant more control to the user. We think that most of the criticisms here point to a need for more attention to recoverability. The requirements that our experts considered lacking were:

• A BACK button, or some other feature that allows the user to return to a list of plants from a specific plant information page. This feature is necessary when the "plant attributes" search method is used and the system returns a list of candidates for the user to view. It might also allow incremental recovery in other areas.

• The ability to select whether lists should be alphabetized by common name or botanical name when lists are made available in the plant attribute or garden area search methods.

• A DELETE button on the save list for editing the list of plants.

• A GPS symbol that indicates the orientation of the user on the map.

Customizability — Users spent a great deal of time scanning the long list of plant attributes on the Plant Information page searching for flower color. The numerous categories might need to be visually grouped, or simplified and made accessible through an extra layer of drill-down menus. The icon for digging might not be distinguishable as a cue to drill down for more details.

Other criteria — we received positive feedback that the take-home options and the goals of maintaining the group experience and naturalistic setting were met. However, non-functional parts of our prototype (as listed in Prototype Maturity Issues) made it impossible to assess these items qualitatively.

Critiques

Reflections on our Choices

The evaluation plans that we chose were appropriate for our system, considering the time and resource constraints. The evaluation part of our project was allocated three weeks, and we defined HCI experts as classmates (or faculty) with HCI experience. Taking this into account, we feel that our evaluation plans were designed to maximize results given our resources. We uncovered some (but not all) usability discrepancies with respect to the criteria we focused on, and the plan was effective in uncovering changes that would improve our system if we were to continue in the future.

As we entered part four of the design process this semester, we had to make some changes to our computational prototype to add the functionality our evaluators needed to complete their tasks. Due to time constraints, we felt it would not be feasible to program in functionality that would let evaluators/potential users see the mistakes they were making; only the correct path was functional. We used a "null response" to indicate that the wrong action was being taken. This decision led to confused evaluators, since they were at times unclear whether they were performing the wrong task or performing the right one with an incomplete prototype.

We found that it is difficult to predict what an evaluator will discover about a system. We not only received information we had heard before, but also comments and suggestions we did not expect from our evaluation plan. We concluded that while it is not feasible to target the results of an evaluation directly to the expectations of the designers, the beauty of expert and user evaluations is that non-targeted issues become evident. We designed our plan to give us the best chance of getting results related to our criteria, and we feel we achieved that, in addition to having other issues pointed out.

While our initial project was narrowed to the membership subgroup, we discovered that the infrequency of our users' visits, and the other non-expert motivations that placed people in this subgroup, pointed to a "kiosk-targeted" user group. As a result, learnability became a primary focus for a successful design, as opposed to other criteria better suited to "expert users."

We had initially included recoverability among our key criteria but eliminated it to accommodate time constraints, given the scope of our project and the availability of our evaluators. In retrospect, we should have kept recoverability in our evaluations, as it was addressed in various ways throughout them.

It would have been better to test our evaluation forms for clarity with individuals outside of our design group before administering them to our expert evaluators/potential users.

Analysis of Expert Feedback

We feel that some of the expert feedback focused on an incomplete prototype rather than on an ineffectual interface design. This is likely because evaluators tend to view any prototype as being very close to a fully functional interface.

In earlier iterations of our design, we performed user assessments with novice users. A majority of the feedback from these assessments suggested that we simplify our initial screens. We did so by removing some of the buttons that would not be used in the early action steps users employed to achieve their tasks. Much of the feedback from our expert evaluators suggested that we reinstate these buttons. Clearly, a designer's dilemma exists! More assessments might help determine the right direction.

Future Directions

If the Atlanta Buzztanical Garden project continues beyond the wonderful CS 6750 course, we think that it is best suited to go in the following directions:

1. As mentioned in our Part 3 report, employing Allison Woodruff's Tap Tips (which show silhouettes around active areas) would be very helpful in identifying garden and/or plant boundaries on the map.

2. ABG currently does not have an accurate, to-scale landscape map of the garden. It is essential that this map be created and analyzed: the actual levels of zoom could then be ascertained, and recoverability (BACK buttons or similar features) could be better designed. This map is also essential to keeping the system's zoomable map updated.

3. A frequent error in our system was the expectation that once a plant name appeared in the plant list, the system would automatically call up that plant's information page. The FIND IT button could be implemented as a forcing function: it would not appear until after the plant name is selected, so the user would presume it must be tapped to continue.
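A minimal sketch of that forcing function, assuming a plain web prototype with a hypothetical element id and selection hook:

```typescript
// Hypothetical forcing function: FIND IT stays hidden until a plant
// name is selected. The element id and showPlantInformation are
// illustrative assumptions, not the prototype's actual code.

declare function showPlantInformation(plantId: string): void; // assumed page loader

const findItButton = document.getElementById("find-it") as HTMLButtonElement;
findItButton.hidden = true; // nothing to find yet

function onPlantSelected(plantId: string): void {
  // Revealing the button only now makes tapping FIND IT the obvious
  // (and only) way to continue to the plant information page.
  findItButton.hidden = false;
  findItButton.onclick = () => showPlantInformation(plantId);
}
```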

4. In addressing a gap in Norman's 7-stage model, in which the user might not know whether the name they are searching by is the botanical name or the common name, our list of names could always default to "all." This would produce a comprehensive list of all names in the database, from botanical and common names to garden areas. The user could then narrow the list by "knowledge in the head" if they know which category the name belongs to, or by "knowledge in the world" if the plant marker makes evident which name is botanical and which is common. If they cannot narrow the field, they can search the entire database.
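One way to realize this "default to all" behavior would be a single merged index over botanical names, common names, and garden areas, narrowed only when the user chooses a category. The sketch below rests on that assumption; the NameEntry type and sample entries are illustrative:

```typescript
// Illustrative merged name index: by default a query searches every
// category at once. The NameEntry type and sample data are assumptions.

type NameCategory = "botanical" | "common" | "gardenArea";

interface NameEntry { name: string; category: NameCategory; plantId?: string }

const nameIndex: NameEntry[] = [
  { name: "Verbena bonariensis", category: "botanical", plantId: "vb-01" },
  { name: "Purpletop vervain", category: "common", plantId: "vb-01" },
  { name: "Perennial Garden", category: "gardenArea" },
];

// Users who don't know whether a name is botanical or common still get
// a hit ("knowledge in the world" can then narrow it); users who do
// know the category can restrict the search ("knowledge in the head").
function searchNames(
  query: string,
  category: NameCategory | "all" = "all"
): NameEntry[] {
  const q = query.toLowerCase();
  return nameIndex.filter(
    (e) =>
      e.name.toLowerCase().includes(q) &&
      (category === "all" || e.category === category)
  );
}
```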

5. A frequent and persistent problem was the time needed to locate a specific plant detail on the Plant Information page: users took a long time scanning the many categories listed to find flower color. One solution might be to organize the current 13 categories into three or four major categories, such as Appearance, Plant Care, and Botanical Info. Color coding these major categories might also facilitate scanning.

6. The gardener's journal metaphor was implemented only in a cursory fashion in this prototype. A more thorough prototype should include icons, graphics, and terms that more artfully and thoroughly employ this metaphor, thereby enhancing and supporting the system-to-real-world match. Some of the graphics discussed were page boundaries that look like pages from a diary, a voice that sounds like a gardener (farmer), and water, sun, and pruning-shears icons for plant care.

7. An ambitious future direction that was initially discussed was a user’s ability to "create their own garden" by selecting visual representations of plants and placing them in a 3D environment that the user could walk through. This direction might most successfully address the subgroup population of members who visit ABG specifically to generate ideas for a real-world landscape project.

8. Another ambitious future direction that may be feasible is a web-based "Virtual ABG Garden" application offered by ABG as a value-added feature of membership.


