This project started with a real challenge I face at school.
In Fall 2015, I took a class called “cinema theory and practice”. Our professor asked us to create short visual essays on themes such as “door and window”, “eye and gaze”, and “skin and touch”. We watched movies for hours on end trying to find clips that contained specific elements, which was a tedious and tiring process. Sometimes we had a particular scene in mind but just couldn’t remember the name of the movie or the director. I came up with the idea of an application with keyword search that categorizes movie clips by their attributes.
Process documentation / App Demo
1-1. User Interviews
I interviewed the professor who taught the cinema theory and practice course and five graduate students from the NYU Tisch graduate program. They told me it would be great to have a tool that helps them remember or search for scenes that share a common theme, especially when they have forgotten the title of the movie and the name of the director.
1-2. Ideation Process
I came up with a few solutions to the problem. Solution 1 did not work because many of my users could not remember the look of a particular director. Solution 2 was inspired by how users describe a scene to a friend when they have forgotten the name of the movie. Solution 3 was based on a precedent from Tyler Henry, a Parsons graduate student, in which he uses gesture as input to aggregate clips. For instance, if you wave your hand at a screen, the screen shows you all clips that contain a hand-waving gesture.
1-3. Decide on one solution
Solution No.4 is an app with keyword search that uses scene labeling to describe scenes and group them by their attributes. For instance, when the user inputs a keyword such as “closeup”, the app shows the scenes that have the tag “closeup” attached to them. Users told me solutions No.2 and No.4 were the most practical and useful, so I combined them to create my prototype.
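The tag-based grouping described above can be sketched in a few lines. This is a minimal illustration only; the `Clip` type, the sample titles, and the tag names are hypothetical, not the app's actual data model.

```swift
// Minimal sketch of tag-based clip search. The Clip type, sample data,
// and tags are illustrative, not the app's actual data model.
struct Clip {
    let title: String
    let movie: String
    let tags: Set<String>   // attribute tags such as "closeup", "door", "gaze"
}

let clips = [
    Clip(title: "Opening shot", movie: "Sample Film A", tags: ["closeup", "gaze"]),
    Clip(title: "Hallway scene", movie: "Sample Film B", tags: ["door", "tracking"]),
    Clip(title: "Final scene", movie: "Sample Film C", tags: ["closeup", "door"])
]

// Return every clip carrying the given tag, ignoring case.
func search(_ clips: [Clip], keyword: String) -> [Clip] {
    let key = keyword.lowercased()
    return clips.filter { $0.tags.contains(key) }
}
```

A query for “closeup” would then return every clip labeled with that tag, regardless of which movie it comes from.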
1-4. Flow Map
Before sketching, I created a flow map of the core interaction of this app - searching for and watching clips. I broke the process down into the following steps: the user identifies the need to search, searches by keyword, narrows down by attribute or category, watches the clip, and saves the link.
1-7. Clickable Prototype
I created this clickable prototype using Adobe Experience Design. I wanted to find out whether my users could search for relevant clips by keyword, so I tested my prototype with 5 users using the RITE method. I gave them a simple task: find a clip that has the keyword “closeup” attached to it. There were two major takeaways. First, users told me that instead of categorizing by “Movies” and “TV Shows”, I should consider narrowing down the search by attributes of the clips such as “action, technique, object, emotion”, which would help them find a specific clip. Second, two users tapped on the button-like tags instead of the search bar on top when I asked them to search by keywords. They said those tags fragmented their attention so much that they missed the search bar. I changed the tags to an underline style and tested it with the subsequent users.
1-8. Iterate based on user feedback (1)
Changed button-like tags to underline-style tags
1-9. Iterate based on user feedback (2)
Changed the segmented control from “Movies/TV series” to detailed attributes of clips
1-10. Code in iOS Swift
I learned Swift for iOS using online tutorials over three weeks and was able to implement the basic functions of this app: keyword search in a built-in database, revealing a detail view from a custom master table view when the user chooses a table cell, playing and pausing video, and accessing the photo album to upload photos and videos. Check out the demo at the beginning of this project.
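The keyword search against the built-in database can be sketched as a case-insensitive filter over an in-memory collection. This is a hypothetical sketch under that assumption; the `SceneRecord` type, field names, and sample entries are invented for illustration and are not the app's real schema.

```swift
import Foundation

// Hypothetical sketch of keyword search over a built-in database,
// modeled here as an in-memory array filtered case-insensitively.
// Type and field names are illustrative only.
struct SceneRecord {
    let movie: String
    let description: String
}

let database = [
    SceneRecord(movie: "Sample Movie One", description: "Slow zoom into an eye"),
    SceneRecord(movie: "Sample Movie Two", description: "Gaze through a courtyard window")
]

// Match the query against both the movie title and the scene description.
func results(in db: [SceneRecord], query: String) -> [SceneRecord] {
    let q = query.lowercased()
    return db.filter {
        $0.movie.lowercased().contains(q) || $0.description.lowercased().contains(q)
    }
}
```

In the app, the filtered array would back the master table view, and selecting a cell would push the corresponding detail view.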
Google CIO - CSAT Dashboard
Team: Ruth (MBA), Elya, Hong, Trishala (Engineers), Me (Designer)
During the Fall 2016 semester, I worked with 4 graduate students from the Cornell Tech program on a challenge posed by the Google CIO team: “How might we use natural language processing or machine learning to better assist people who call Google for support?”
We met with two senior managers from the Google Customer Support Applications team on a weekly basis to discuss the scope of the project, map out the pain points of their workflow, and conduct user testing sessions with the project's stakeholders. I was the sole designer on the team, responsible for the interaction design of the dashboard, conducting usability testing sessions, and crafting the final presentation.
Businesses use customer satisfaction (CSAT) surveys to understand how their products and agents are satisfying their customers. These surveys have extremely low response rates (~15%), making it difficult for the business to improve satisfaction at all levels, from product flaws to individual agent service. Our solution is simple yet powerful: replace CSAT surveys with sentiment analysis, giving the company information on 100% of customer interactions and enabling it to use data to improve the customer experience.
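The core idea can be illustrated with a toy calculation. This is not Google's actual pipeline; it simply assumes an NLP model that emits a sentiment value in [-1, 1] per utterance, averages those per call, and rescales the result to a familiar 0-100 CSAT-style number.

```swift
// Toy illustration of sentiment-derived CSAT, not Google's real pipeline.
// Assumes an NLP model emits one sentiment score in [-1, 1] per utterance.
func callSentiment(_ utteranceScores: [Double]) -> Double {
    guard !utteranceScores.isEmpty else { return 0 }
    return utteranceScores.reduce(0, +) / Double(utteranceScores.count)
}

// Map an average sentiment in [-1, 1] onto a 0-100 CSAT-style scale.
func csatScale(_ sentiment: Double) -> Double {
    (sentiment + 1) / 2 * 100
}
```

Because every call produces utterances, every call produces a score, which is what closes the ~15%-response-rate gap that surveys leave.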
1-1. Simplified System Diagram
At the very beginning, we generated a simple product loop. Customers who call Google receive a survey at the end of the call; the results of the survey are sent to agent managers, VPs, and other stakeholders to make business decisions, and those decisions will hopefully improve the customer experience.
1-2. Explore solutions
Based on the product loop, we came up with three ideas to improve the customer experience. The first solution assists customers while they are waiting on the line. The second turns speech to text in the menu to cut down service time (this already exists at other companies). The third uses sentiment analysis to better understand the customer's emotions in real time.
1-3. Focus on Solution No.3
We decided to focus on solution 3 because agent managers from Google and other companies validated the need for more accurate CSAT scores. Replacing surveys with sentiment analysis benefits both the customers, who hate filling out surveys, and the business owners, who do not receive enough survey responses.
1-4. Prototype a CSAT dashboard
In order for agent managers to act upon the data collected through sentiment analysis, we needed an interface, in this case a CSAT dashboard, for the manager to interact with. Managers said they wanted to use this tool to quickly find the agents who need more training. I started with some sketches of the dashboard and created clickable prototypes for our target users to test.
1-5. User testing sessions
I tested my prototype with 10 users using the RITE method (Rapid Iterative Testing and Evaluation). I gave them a simple task: imagine you manage a team of agents; find the lowest-performing agent on your team and listen to a sample call.
1-6. Historical Performance Tab
This is the historical screen, where managers can see the overall performance of the team compared to the last few weeks. A designer from Google Creative Lab pointed out something I had overlooked: the meaning of colors. I was training users to recognize green as positive and red as negative, whereas in the BEFORE screen I had used red to indicate "today", which should be neutral.
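The revised color rule is simple enough to state as code. This sketch is only an encoding of the design decision discussed above; the type and function names are invented for illustration.

```swift
// Sketch of the revised color rule: green for positive change, red for
// negative change, and neutral for reference markers such as "today".
// Names are illustrative, not from the actual dashboard code.
enum TrendColor { case green, red, neutral }

func trendColor(delta: Double, isReferenceMarker: Bool = false) -> TrendColor {
    if isReferenceMarker { return .neutral }  // "today" marker stays neutral
    return delta >= 0 ? .green : .red
}
```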
1-7. Team Tab
These screens show each agent's performance on the team. An Assistant Vice President at Merrill Lynch said that the CSAT score of each agent would be meaningless if the volume of calls was not considered. He would also like to know how we calculate the average number for each agent. Based on that feedback, I changed the structure of the data to show both the scores and the call volume for each agent.
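The point about call volume can be made concrete with a volume-weighted average, which is one reasonable way to aggregate per-agent scores (the field names and sample numbers below are illustrative, not from the actual dashboard).

```swift
// Volume-weighted team average, reflecting the feedback that a per-agent
// CSAT score is only meaningful alongside call volume. Names and sample
// data are illustrative.
struct AgentStats {
    let name: String
    let csat: Double      // average CSAT for the agent
    let calls: Int        // number of calls behind that average
}

func weightedTeamCSAT(_ agents: [AgentStats]) -> Double {
    let totalCalls = agents.reduce(0) { $0 + $1.calls }
    guard totalCalls > 0 else { return 0 }
    let weightedSum = agents.reduce(0.0) { $0 + $1.csat * Double($1.calls) }
    return weightedSum / Double(totalCalls)
}

let team = [
    AgentStats(name: "Agent A", csat: 90, calls: 10),
    AgentStats(name: "Agent B", csat: 50, calls: 90)
]
// The raw mean of 90 and 50 is 70, but weighting by call volume gives 54,
// because the low-scoring agent handled far more calls.
```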
1-8. Agent Tab
Stu Katz from OrderGroove and Will Foley from Splash both have experience managing teams of agents. They mentioned that the ability to listen to a sample call is crucial. Also, a Google CIO senior manager told me he would like to see the agent's CSAT score here.
1-9. Proof-of-concept Prototype
This is a proof-of-concept prototype that demonstrates the use case of our product. An agent manager can browse the overall performance of the team from the historical performance tab. Clicking on the team performance tab, he can compare each agent's CSAT score, with the number of calls as a reference. He can then select the agent with the worst performance to view details. In the agent tab, the manager can see the overall CSAT score and the distribution of that score. He can also listen to a sample call by clicking one of the links. A sample call player pops up on click and plays the conversation with real-time sentiment analysis. The manager can even click on a low point in the conversation and find out exactly what is happening there.
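The first step of this flow, surfacing the agent the manager should review, amounts to a minimum-by-score lookup. A minimal sketch, with an invented roster and field names:

```swift
// Sketch of the first step in the flow: pick the lowest-scoring agent to
// drill into. Names, scores, and the AgentRow type are illustrative.
struct AgentRow {
    let name: String
    let csat: Double
}

let roster = [
    AgentRow(name: "Agent 1", csat: 82),
    AgentRow(name: "Agent 2", csat: 64),
    AgentRow(name: "Agent 3", csat: 91)
]

// min(by:) surfaces the agent the manager should review first.
let lowestPerformer = roster.min(by: { $0.csat < $1.csat })
```

In the dashboard, selecting that row would open the agent tab with the score distribution and sample calls.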
The workflow starts with the vet viewing a diagnostic result and ends with some kind of communication among the vet, staff within the practice, the IDEXX consultant, and the pet owner.
1-2. User testing plan
We visited more than 30 practices in NYC and remotely interviewed over 50 vets who use the IDEXX app in their daily work. We showed them our prototypes and watched them complete simple tasks to validate our design assumptions.
1-3. Current App
This is the current app. The only action users can perform is sharing the test result with others through email. Our job was to identify potential functions in the IDEXX mobile application that would simplify vets' and technicians' daily practice.
1-4. Methods vs. Recipient
We first tested whether people think about communication by method (call, email, SMS) or by recipient (IDEXX, office, doctor, pet owner). 87% of the people we tested said they liked the "method" version better, simply because they were familiar with those icons.
1-5. Icons placement
We then tested whether vets preferred to have the icons at the top or in the tab bar at the bottom. To our surprise, most of the vets preferred to keep pet-owner-related actions separate from everything else. They said it would cause panic and worry if they called a pet owner by mistake. So we decided to group owner-related actions at the top and everything else in the tab bar.
1-6. Block phone number option
In our user testing, vets mentioned that they wanted their phone number blocked for privacy reasons. So we added an option at the bottom to block their number on outgoing calls.
1-7. Clickable Prototype
I produced 13 clickable prototypes using Photoshop and InVision. The prototype we delivered at the end of the project has 90+ screens linked together.
New York City Ballet Checkout Flow Wireframe
This is a redesign of the New York City Ballet checkout flow.
Android Wear - Fertility App
- Contextual Notification
The app reminds the user when it is time to record their temperature
- Selection List
A simple list optimised for ease of use on a small screen: the focused item snaps to the center of the screen, and a single tap selects it
- Manual Input
The focused item becomes pink and centered; a single tap selects it.
- Handheld Device Input
Provides positive feedback to the user that the information has been saved to the backend
- Action Button
Because a user's fertility information is very private and should not be exposed on a watch unintentionally, this action button serves as a confirmation before viewing the user's statistics
- Further Actions
The user can view all statistics with a simple swipe and select.