HYOWON LEE  B.Eng., M.Sc., Ph.D.
Designer-Researcher at Singapore University of Technology and Design

Here is a list of the major events and work (in reverse-chronological order) I did in 2004.


CCTV search interface design (October - November)
CCTV cameras record video constantly, so a large amount of video data accumulates and it is difficult to retrieve a particular segment when we want one. I have been sketching a system interface that allows a user to query and browse a large database of recorded videos from 150 CCTVs around a campus. Connected to our L'OEUVRE project, the underlying system uses object and event detection to effectively highlight, select, and trace the objects within video. The design is based on the more generic sketch I did in January this year for L'OEUVRE, but this time the usage constraints (a specific location and geography, the number of cameras, and Alan's scenario) made this one far more concrete and realistic.
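The basic retrieval problem can be sketched in a few lines: given many cameras recording continuously, find the stored segments from one camera that cover a moment of interest. This is a minimal toy model; the field names and the in-memory list are my own assumptions standing in for whatever store the real system would use.

```python
from datetime import datetime, timedelta

# Toy index of recorded segments; in a real deployment this would be a
# database over the 150 cameras, not a Python list.
recordings = [
    {"camera": 12, "start": datetime(2004, 10, 5, 9, 0), "minutes": 30},
    {"camera": 12, "start": datetime(2004, 10, 5, 9, 30), "minutes": 30},
    {"camera": 87, "start": datetime(2004, 10, 5, 9, 0), "minutes": 30},
]

def segments(camera, at):
    """Return the recorded segments from one camera that cover time `at`."""
    return [r for r in recordings
            if r["camera"] == camera
            and r["start"] <= at < r["start"] + timedelta(minutes=r["minutes"])]

hits = segments(12, datetime(2004, 10, 5, 9, 45))
print(len(hits))   # → 1 (only the 09:30-10:00 segment of camera 12 matches)
```

Object and event detection would then operate within the returned segments, narrowing a time window down to the moments where a tracked object appears.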


User-interface design for Físchlár-TREC2004 (April - September)
I have been sketching a draft user interface for this year's participation in the Search task of TRECVID2004. Every year our group has developed a new interactive system for TRECVID participation. This year I started early, sketching from Georgina's original layout of last year's UI and refining it to combine all the elements more coherently. Our group discussed the sketch in meetings, after which I drew an updated version and showed it at the following meeting. At each meeting the group brainstormed over the sketch, and from that I modified it and prepared for the next one. This process continued for a few months, and by early July we had reached a point where we had (roughly) agreed on all elements.

I divided the screen into two areas: a work area and an administrative area. The work area occupies most of the screen, and users search and browse within it. The admin area holds everything else: a mini clock, the task number and the saved shot list. Instead of just having a separator line between them, I introduced a "work plane" that sits on the screen. Also used this year are 3-D buttons and curvy lines that are not intrusive or distracting but, on the contrary, tie the different on-screen elements into one coherent theme.

This final interface sketch was then turned into a mock-up HTML version, and the HTML version into XSL stylesheets to be used in the XML architecture of the Físchlár system. The system was completed by early September and used for the final Search task experiment, after which the results were submitted to NIST.
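The idea of the pipeline is that the back end emits XML result documents and a stylesheet renders them as HTML, so the interface sketch lives entirely in the presentation layer. As a rough illustration of that separation (the element and attribute names below are illustrative, not the actual Físchlár schema, and the real system used XSL stylesheets rather than Python):

```python
import xml.etree.ElementTree as ET

# Hypothetical XML result document from the search back end.
results_xml = """
<searchresults task="12">
  <shot id="shot_041" keyframe="kf041.jpg" score="0.91"/>
  <shot id="shot_187" keyframe="kf187.jpg" score="0.74"/>
</searchresults>
"""

def to_html(xml_text):
    """Render the XML result list as an HTML fragment, mimicking the
    role an XSL stylesheet plays in the real pipeline."""
    root = ET.fromstring(xml_text)
    rows = []
    for shot in root.findall("shot"):
        rows.append('<li><img src="%s" alt="%s"/> score %s</li>'
                    % (shot.get("keyframe"), shot.get("id"), shot.get("score")))
    return "<ul>\n" + "\n".join(rows) + "\n</ul>"

print(to_html(results_xml))
```

Swapping in a different stylesheet changes the look of the search screen without touching the XML the back end produces, which is what made iterating on the sketch cheap.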


CIVR2004 (21-23 July)
Our group hosted CIVR2004 (3rd International Conference on Image and Video Retrieval) on campus. Preparation had been under way since last year; at the event, Ciaran and I arranged the CDVP demo sessions: 4 data-projector stations, each with a laptop, in the hall of the Nursing building, demonstrating CDVP's work and our working systems to the conference delegates.

During the demo sessions, which spanned 2 days, I took charge of the Físchlár-News Story system demo; the system has been operational for 15 months now. I demonstrated it by introducing what the system is and how it works, and by outlining the long-term user evaluation I conducted during March-June. This system nicely showcased the complete cycle of an experimental system (design, implementation, deployment, and user evaluation).

The 3-day conference finished successfully.


Físchlár-News usage study (March - June)
In January and February I drew up a detailed usage-study plan for the Físchlár-News system, and I have started a long-term user study based on it. Most user-oriented evaluations we have conducted so far (especially on the Físchlár-TREC family of systems) have focused on obtaining users' opinions after they are first introduced to a system and use it intensively for 2-3 hours. While this is useful for getting a first impression of a system and for measuring interface efficiency for first-time users (focusing on learnability), systems such as Físchlár-News are not meant for that kind of use. The usage scenario of Físchlár-News, as drawn in Alan's RIAO 2004 paper, is long-term: a user accesses the system daily, checks some news stories in the middle of work, does some lazy browsing of related news, checks recommended stories, and goes back to work. Interruptions and multiple-window use will be the common way of using such a system (completely different from the focused, uninterruptable 3 hours in a lab, as in the TREC interactive experiments). My study plan is built around accommodating this normal, daily usage of the system over a long period.

I set out a list of questions I wanted answered and adopted a mixture of methods from usability engineering to answer them. Questionnaires and indirect observation, via an incident diary and interaction logging, became the two major ways of obtaining usage information in this study. The diagram above shows the period of usage by each participant, and reveals my busy schedule at the beginning of April (introducing the system and study to participants) and the beginning of May (the debriefing period) this year.
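The interaction-logging side of the study amounts to recording timestamped, per-user events as participants use the system. A minimal sketch of such a logger follows; the event names, fields and CSV format are my assumptions, not the actual log format used in the study.

```python
import csv
import io
from datetime import datetime

def log_event(stream, user_id, action, target=""):
    """Append one timestamped interaction event as a CSV row."""
    writer = csv.writer(stream)
    writer.writerow([datetime.now().isoformat(timespec="seconds"),
                     user_id, action, target])

# In the real study this stream would be a file on the server;
# an in-memory buffer keeps the sketch self-contained.
buf = io.StringIO()
log_event(buf, "p07", "play_story", "news_2004-04-12_03")
log_event(buf, "p07", "browse_related")
print(buf.getvalue())
```

Logs in this shape can later be grouped by user and day to reconstruct each participant's period of usage, which is exactly the kind of per-participant timeline the diagram summarises.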


Object- & Event-based video interface design (13 January)
I did an initial sketch and mock-up interface design for L'OEUVRE. When a design involves improving or refining an existing interface we do not go through a 'conceptual design' stage, as has been the case for most of the Físchlár interfaces (except the original Físchlár-TV interface design in 1999). An interface that allows object- and event-based interaction is something not available anywhere in the world at the moment. Research on experimental systems that perform object and event detection is at such an early stage that consideration of a possible front-end still seems far ahead: usually, once an underlying technology matures (for example, shot boundary detection), interaction based on that technology gets attention and is considered properly. My design was based on the assumption that object and event detection is already mature: what would be a good interaction mode and style that allows the user to query and browse in a system where automatically identified objects and events in the video collection are available? I started by limiting the 'unit representation' of videos to:

  • Shot
  • Scene
  • Programme
and then worked out suitable visual and interactive representations for these units. My focus was on a very effective way of highlighting identified objects and events in video, and on a double representation of these objects and events for the user to interact with. The idea of a stack of these units being searched, filtered, manipulated and searched again has been established, and an integrated interface has been sketched. Check the final design (PPT) and the accompanying design rationale (DOC) (internal only).
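The three unit types and the stack of progressively narrowed result sets can be modelled in a few lines. This is only an illustrative data-structure sketch under the stated assumption that object and event detection already works; the class and field names are mine, not from the design documents.

```python
from dataclasses import dataclass, field

@dataclass
class Unit:
    kind: str                                   # "shot", "scene", or "programme"
    label: str
    objects: set = field(default_factory=set)   # objects detected in the unit
    events: set = field(default_factory=set)    # events detected in the unit

units = [
    Unit("shot", "s1", {"car", "person"}, {"enters"}),
    Unit("shot", "s2", {"person"}, {"exits"}),
    Unit("scene", "sc1", {"car"}, set()),
]

# A stack of result sets: each filter pushes a narrower set, and popping
# the stack would undo a filtering step during browsing.
stack = [units]
stack.append([u for u in stack[-1] if "person" in u.objects])
stack.append([u for u in stack[-1] if "exits" in u.events])
print([u.label for u in stack[-1]])   # → ['s2']
```

The pop-to-undo behaviour is what makes the stack metaphor attractive for browsing: the user can filter aggressively, inspect the result, and step back without re-issuing the original query.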


Design by Hyowon Lee 2012