
Case Study - Rescue Lens

What is it?

It's a live interactive video support product: an operator can see a problem through the camera of a customer's mobile phone.

Lens provides a real-time video stream between technicians and end-user customers via their Android or Apple smartphone cameras, to quickly identify technical issues of any kind for just about any product or service — from a broken dishwasher to machinery too large to move. Technicians can remotely guide end users through troubleshooting, problem resolution, product setup, and more.

See more here: Rescue Lens official page


Normally, before launching a new product, you would do a competitive analysis: check similar products, conduct some user interviews, and so on.


At the end of 2014 this was a completely new product. There were no competitors and no existing users, so we could not conduct any user interviews. On the other hand, we had a very serious deadline from the leadership.

More details:

  • Customer side: a mobile app (the biggest challenge)
  • Agent side: we had an existing Remote Control interface (for desktop), so we just modified that UI and reused it for this product
  • No dedicated design phase: development and design started at the same time!
  • This had a disadvantage and advantages as well. The disadvantage is obvious: no proper design time. The advantage: we could have a demo version of the mobile app early on.

  • In most use cases, users will use this app only once
  • For the first release, the mobile app offered chat as the only communication mode with the agent; adding audio was planned for later
  • We had only 6 months to release a brand-new product in April 2015!

The process

I had an initial idea, but no app at that time.

So I decided to do user test No. 1: it was actually a paper prototype.

Why paper? It was fast, and it gave us initial feedback on whether we were on the right track!

Two weeks later the developers were able to come out with an initial version of the mobile app.

Hooray! Let's do user test No. 2, with a very early version of the app!

We tested the following cases:

  • writing / receiving a message
  • pausing / resuming the camera
  • swiping the chat up / down
  • disconnection: the app tries to reconnect
  • ending the session

One week later we had a newer version of the mobile app, so we repeated the same test: user test No. 3!

We tested the same scenarios as in test No. 2.

By this time we had more or less the full app flow, covering almost all scenarios.

This is the flow for the Android version of the app: 

Designing the menu system

One of the biggest challenges was designing a menu for this app. The UI designer, the PO, and I preferred a kind of "hamburger" menu:


  • it would be available in the chat as well
  • it is flexible: we could easily add new menu items later

The developers, meanwhile, had ideas about "3 dots" and "floating" types of menus besides our (the designers') hamburger idea. How could we decide which menu to use in the app?

The developers were able to create these three different versions of the app, so we managed to test them with users (user test No. 4)!

This was a big test, with 6 users: we tested each menu version with two users.

Though the main goal of the test was to decide which menu to use, we had additional goals:

  • get general feedback from users
  • find out whether the app has any serious usability issues
  • figure out additional improvement points

Result of the test

All three menus proved usable, so we finally chose the hamburger version, our original idea.

1. Positive:

 - overall a very good experience, no serious usability issues

2. To be improved:

The most important finding:

For Pause, and for the disconnected / trying-to-reconnect state, the application used an upper bar. To resume from pause, users usually went to the menu instead of the upper bar.

Conclusion: the affordance of the action text in the upper bar was low.

Next step: give the action part of the text more affordance: break the text into two lines, add more space, and highlight the action text: "Tap here to resume".

3. Problems

The test was also useful because we detected some (not serious) problems and had the chance to fix them.

Initial screen design

Designing the starting screens was a bit difficult:

  • users should get a hint of what they can do with the application
  • but we know (it's a fact) that users usually skip these screens as fast as possible
  • so we should not put too much text on these screens, and we should limit the number of screens as well
  • we should not show the hints on the 2nd or 3rd start, so we need different screens for the first start and for subsequent starts of the application
  • in addition, for scammer protection, we should highlight that users should allow access only to trusted persons

Plus, on iOS there is an additional system screen for allowing camera access.

As a result, we finally came up with this flow for the initial screens:

Update - Adding audio

After almost a year, we decided to add a significant update to the product: an audio feature!


  • we were aware from the beginning that on the customer side, holding the phone, typing, and following the agent's instructions all at once is difficult
  • on the other hand, chat was good enough for testing the MVP (the previous version); now that it had turned out that the product makes sense, we could add audio

This time we thought we should use direct buttons instead of the hamburger menu:


  • showing the buttons directly provides much easier access
  • the user gets direct feedback on whether the camera and the mic are turned on or off

The developers created a prototype of the app with the new UI, and we tested it with some users. Since we did not encounter any usability issues, we decided to go ahead with the new interface.
