Q3 Answers
- Although it is mentioned sometimes, the visualisation of the different levels does not really make this clear: is the whole process of design and validation a forward and backward stepping through the levels? Is that correct?
- Yes, ideally.
- In the validation of the design, one approach is to ask how often a tool has been adopted. Isn't that kind of validation a bit risky? If there were no adoptions at all, a whole system that no one uses was created. That sounds a bit useless to me.
- It depends on your goals. Sometimes you just want to make a prototype system as a demonstration of your technique/ideas and then adoption isn't important. Plus adoption requires maintenance/support/etc and someone needs to do that.
- Is it correct that the difference between a field study and a lab study is the testing environment? A field study takes place in the "normal" surroundings familiar to the domain expert, while a lab study happens somewhere else, just to test the software by performing specified tasks.
- That's one major component. The other one is how you evaluate success. Lab studies typically have a quantitative measurement which you can compare using mathematical techniques. Field studies typically have a more subjective assessment.
- How useful is it to do a lab study? In the end, users will want to do their own work, not the tasks the software was created for.
- It depends on the question you're trying to answer. The idea of a lab study is to answer questions about the general populace or comparing interfaces and is really useful if you have some objective measure of "better" like "faster" or "fewer errors." Usually you're not testing an entire system either but some small component so you can attribute, say, a faster workflow to that change instead of one of many changes.
- How do you choose the right validation methods, apart from based on the level you want to test?
- Usually you start with the question you want to answer and then figure out the validation method that matches this question. This takes a lot of time and experience though. Generally, if you can come up with a numerical measure that makes sense then a lab study or maybe even a purely mathematical validation may work. Otherwise you do case studies or field studies.
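To make "compare using mathematical techniques" concrete, here is a minimal sketch of how a lab-study result might be analysed. All the numbers are invented, and Welch's t-statistic stands in for whatever test actually fits your study design:

```python
import statistics

# Hypothetical task-completion times (seconds) for two interfaces;
# all numbers are invented for illustration.
interface_a = [12.1, 10.8, 13.5, 11.9, 12.7, 11.2]
interface_b = [9.4, 10.1, 8.8, 9.9, 9.2, 10.5]

def welch_t(x, y):
    """Welch's t-statistic for two independent samples."""
    mx, my = statistics.mean(x), statistics.mean(y)
    vx, vy = statistics.variance(x), statistics.variance(y)
    se = (vx / len(x) + vy / len(y)) ** 0.5
    return (mx - my) / se

t = welch_t(interface_a, interface_b)
print(f"mean A = {statistics.mean(interface_a):.2f}s, "
      f"mean B = {statistics.mean(interface_b):.2f}s, t = {t:.2f}")
```

A real lab study would also check the test's assumptions (sample size, distribution) and report effect sizes, not just a single statistic.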
- Regarding the "Four Levels of Design": to me it sounds as if you MUST do a field study to complete the first level, "Domain Situation", and subsequently build a solid vis tool. But if I have no way to carry one out (long distances, insufficient resources, etc.), that apparently has a negative effect on the following levels, and in the end I will never be able to develop a tool that is ideal for the target group.
- Yes, exactly. But you can talk over Skype or similar where possible. In general, if you want to do a field study, you have to choose your collaborators carefully.
- With "idiom validation", I don't understand what is meant by "justify the design". Do I have to justify why user X should, for example, navigate with the keyboard (interaction idiom), or why I use a tree model (visual encoding idiom)? If so, what for?
- This means things like explaining why you used a bar chart versus a pie chart, or a particular color map, or (as you say) a particular navigation method. Often this evaluation is done by comparing *against* other design options you may have considered. This is done for two reasons: so that others can evaluate whether you made good design choices that fit the goals of the system, and so that others trying to design similar systems can look at what you did and the reasons for your choices, which saves them some missteps in the future.
- We have already heard that visualisation is a very iterative process. But I can imagine it must be very frustrating to build a complete system only to learn at the end that you may have addressed the wrong things, or that in the end the users cannot do what they were actually supposed to do. Then you have to work your way slowly back from the algorithm level and forward again downstream until you reach a satisfying validation at every level of the model. But I think you often don't get all the information right away. Everyone knows those "special" customers who, as we all know, are always right…
- Yes, true. That's the importance of doing something like paper prototypes or some other easily disposable method. You will be making a lot of these in the beginning and probably throwing most of them away so you don't want to spend a lot of time on implementation details. But visualization researchers also must be careful about selecting good users with interesting problems that can provide good feedback and not just be relegated to a "code monkey" position where you just implement what the users demand.
- Why do well-designed tools fail to be adopted and some poorly designed tools win in the marketplace?
- Difficult question :) But there's a lot more to adoption than design, like marketing, support, and momentum.
- Why "quality metrics", when some of them remain unproved conjectures?
- Why should metrics be proven? How do you "prove" that something improves someone's workflow? We're not able to answer those questions yet.
- Concerning flow maps: Why is there no actual discussion of who uses them and why? I thought that target groups are always necessary.
- That's a good point! Sometimes the target group is assumed. In some cases we know the typical issues with some visualization methods or data. Then we can design a vis technique that attempts to solve those issues without having to go back to the target user group. But you're right, it's important to keep assumptions about users' needs and how to address them when designing *any* system, not just a visualization one.
- How does the algorithm relate to the validation of a vis? Of course my algorithm can be slow, and that's a disadvantage, but can I really say my vis is invalid just because of a slow algorithm?
- It depends, actually. There is this notion of "cognitive dissonance" which you want to avoid: if the computer isn't reacting fast enough for you to engage with it, you lose track of what you're thinking about. Imagine watching a movie at 1 frame per second instead of 30. It would be really difficult to figure out what's going on.
- While reading the chapter about LinLogs I was thinking about the examples provided in the d3 tutorial. There is a visualization of the Viennese metro system. My question is: is it really a good way to visualize content like this with nodes and edges that you can drag and drop to any position you like? I mean, I see the connectivity, but I lose all information about locations.
- That's true! The tutorial was just an example of interaction and a network graph layout. However, it also showed what happens if you just apply a naive force-directed graph layout. But sometimes you have to sacrifice accuracy to make things comprehensible. Subway maps, for example, often simplify the actual routes the trains take and the stop locations in order to make the connections easier to see.
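The "naive force-directed graph layout" mentioned above can be sketched in a few lines. This is a toy illustration of my own (the station names, constants, and step clamping are all invented; it is not the d3 tutorial's code):

```python
import math
import random

def force_layout(nodes, edges, iterations=200, k=1.0, max_step=0.1, seed=42):
    """Naive force-directed layout (Fruchterman-Reingold flavour):
    all node pairs repel, edges act as springs that attract."""
    rng = random.Random(seed)
    pos = {n: [rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0)] for n in nodes}
    for _ in range(iterations):
        disp = {n: [0.0, 0.0] for n in nodes}
        # Repulsion between every pair of nodes.
        for i, a in enumerate(nodes):
            for b in nodes[i + 1:]:
                dx = pos[a][0] - pos[b][0]
                dy = pos[a][1] - pos[b][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / d  # stronger when nodes are close
                disp[a][0] += f * dx / d; disp[a][1] += f * dy / d
                disp[b][0] -= f * dx / d; disp[b][1] -= f * dy / d
        # Attraction along edges.
        for a, b in edges:
            dx = pos[a][0] - pos[b][0]
            dy = pos[a][1] - pos[b][1]
            d = math.hypot(dx, dy) or 1e-9
            f = d * d / k  # stronger when endpoints are far apart
            disp[a][0] -= f * dx / d; disp[a][1] -= f * dy / d
            disp[b][0] += f * dx / d; disp[b][1] += f * dy / d
        # Move each node, clamping the step to keep the layout stable.
        for n in nodes:
            dx, dy = disp[n]
            d = math.hypot(dx, dy) or 1e-9
            m = min(d, max_step)
            pos[n][0] += m * dx / d
            pos[n][1] += m * dy / d
    return pos

# A toy line of four stations (hypothetical names, not real metro data).
stations = ["S1", "S2", "S3", "S4"]
links = [("S1", "S2"), ("S2", "S3"), ("S3", "S4")]
layout = force_layout(stations, links)
```

Note that nothing here knows about geography: the layout optimises only connectivity, which is exactly why the real station locations get lost.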
- In chapter 4.7.6 I didn't really get the visualization of the 2-band mirrored variant. For example, if a dark blue "drop" is drawn over a blue "drop" with a lower occupancy, does that mean the darker part can be flipped on top of the brighter one to show that it's just out of the range of the scale?
- The dark blue part is basically the cut off bit of the light blue peak.
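As a toy sketch of that cut-off (my own illustration of the banding arithmetic, not the book's code): a value taller than one band is split into a light base band plus a dark overflow band drawn over it from the baseline.

```python
def two_band_split(value, band_height):
    """Split a non-negative value into two horizon-graph bands:
    the light band is clipped at band_height, the dark band is the
    overflow, drawn over the light one starting from the baseline.
    (In the mirrored variant, negative values are first flipped up.)"""
    light = min(value, band_height)
    dark = max(0.0, min(value - band_height, band_height))
    return light, dark

# A peak of 1.5 with band height 1.0: the light band is full,
# and the dark band shows the cut-off 0.5.
print(two_band_split(1.5, 1.0))  # -> (1.0, 0.5)
```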
- Is it more important/useful to concentrate on one of the four levels of Design or to try to optimize all a bit but nothing too much?
- It would be best to concentrate on all four levels and optimize all of them :) But there isn't enough time to do that and it's not always necessary. Usually, start with the question you want to answer and then find what level in the hierarchy addresses that best.
- Does the choice between top-down and bottom-up also depend on the framework being used?
- I'm not sure I understand the question.
- Is a survey a good tool to find out if domain situation could be improved?
- Possibly, but you have to be very careful about how you design the survey questions. If you ask someone directly what they need, they'll give you an answer about what they "think" they need, but it will be an incremental change at best. The trick is to figure out what fundamental problems people have and build solutions to those. Sometimes your solution may take some getting used to, since it represents a different way of thinking than what the domain person is used to. Usually you need to do something like a field study so you can observe people and figure out their needs.
- Why are the levels of design visualised as "nested" and not in some other way, e.g. as an arrow-style process flow?
- It's because the inner blocks depend on the outer blocks but in turn the validation of the outer blocks depend on the validity of the inner blocks. You could write it as an arrow-style diagram as: Domain situation -> Task abstraction -> Visual encoding -> Algorithm -> Validate algorithm -> Validate visual encodings -> Validate task abstraction -> Validate domain situation. This is a bit more compact as a nested structure.
- Ch. 4.3.2: "Questions from very different domain situations can map to the same abstract vis tasks. Examples of abstract tasks include browsing, comparing, and summarizing." Are there frameworks/tools that make evaluating these recurring tasks easier? And if so, which ones?
- Not that many... There are frameworks/tools for e.g. "crowd-sourced user studies" on Amazon Mechanical Turk (https://facebook.github.io/planout/) and the like. At the moment people often have to roll their own tools.
- In which of the design levels should the lion's share of time be spent? I would guess one of the first two tiers…
- In research, the lion's share of time goes into lab studies and field studies.
- Why can the four levels of vis design be considered only separately in the analysis, if it's an iterative process and an example of the principle of "design as redesign"?
- It's for fault isolation. So you need to make sure each level is valid so you can truly understand where the problem lies. Otherwise you may spend a lot of time trying different visual encodings only to later realize that you mischaracterized the tasks the user wants to perform.
- What kind of transformed data types could provide a better solution for a visual encoding than the original form of the dataset?
- For example, viewing averages and variances rather than the raw items.
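A tiny example of such a transformation (invented numbers): aggregating raw items into a derived dataset of per-group means and variances, which could then be encoded as, say, bars with error bars instead of one mark per item.

```python
import statistics
from collections import defaultdict

# Hypothetical raw measurements, each tagged with a category.
raw = [("A", 3.0), ("A", 5.0), ("A", 4.0), ("B", 10.0), ("B", 12.0)]

groups = defaultdict(list)
for key, value in raw:
    groups[key].append(value)

# Derived dataset: one (mean, variance) pair per group, a much
# smaller table than the original items.
summary = {k: (statistics.mean(v), statistics.variance(v))
           for k, v in groups.items()}
print(summary)  # -> {'A': (4.0, 1.0), 'B': (11.0, 2.0)}
```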
- In downstream validations, how can we be sure that we've chosen a poor visual encoding or algorithm and it's not the abstraction or the interaction technique that has been chosen poorly?
- You rely mostly on previous research and experience, but you can also validate the downstream methods individually.
- What exactly does the term "domain" mean in the vis literature?
- The application scenario for which you're designing the vis tool. So, for example, a trading system would be for the financial domain.
- Why is it sometimes tricky to untangle connections between algorithms and idioms?
- Because a poor algorithm choice can mask a good visual encoding. This is especially true with interaction where an unacceptably slow response can ruin the experience.
- Why are the downstream validations necessary?
- To ensure that those downstream elements are not causing the upstream validations to fail.
- Validation of what? Data? Whether the visualisation shows the real data?
- That's part of it, but also whether the visualization correctly addresses the users' needs. A lot of this is discussed in chapter 1 as well.
- What is the difference between problem-driven and technique-driven work?
- Problem-driven work is where you start with a question or problem and then develop a technique to solve it. In technique-driven work, you start with a technique and then go around applying it in different situations to see when it works and when it doesn't.
- Is a person doing the vis usually part of the research team or is it a person specialized on vis working for a field she/he is no expert in?
- It depends and both have strengths and weaknesses. A vis-specific researcher will bring a lot of knowledge of visualization methods and knowledge from other domains. A domain specialist has a lot of domain knowledge but may not be up to date on the latest visualization techniques and what works and what doesn't work well. The other nice part of bringing in an external collaborator is that you get a unique perspective.
- Asking users about their requirements is insufficient, but I do not think that designers always have the luxury to observe their users. Is there any other way to find out which requirements for the visualization really exist other than observing the user or throwing new designs at them until you find out what is really required (iterative approach)?
- That's true! That's one of the reasons we publish design studies (sometimes called case studies). The idea is that you can look and see what others have tried and see what worked and what didn't for similar problems to your own. That's also why it's so important when doing a design study to write up what did and didn't work and why.
- The book says that it is an important consideration what dataset to use for testing and that there are standard benchmarks. How exactly does that work in the VIS-community? Are there certain datasets everyone agrees are good for testing, or are those benchmarks something different? Also is there the problem that one 'overfits' their algorithm to the benchmark or is that not a realistic problem?
- I'm not aware of standard datasets for vis. Many times people just use the machine learning datasets if those are appropriate.
- At the end of this chapter Munzner says that rigorous formal testing of algorithms for correctness is usually omitted. Has that ever been a problem in the VIS community, or is the implicit validation against incorrectness always enough?
- Sometimes it's a problem. For example, the original marching cubes paper could produce 3D objects that are not solid, and this spawned a whole set of papers trying to fix the problem. The difficulty, though, is how to define formal correctness for such a high-level algorithm and then validate against it.
- All the different examples use different validation methods on different levels. This is obviously because the focus of the work is also different. Are there standard approaches to validation? For example, if you present a new visual encoding, just do a field study before and after the introduction of the new visualization to collect anecdotal evidence and justify your visual encoding. Or does one just do the validations others have done before? Or do you just follow your experience and common sense?
- Yes, and that is where the nested model helps a lot. A new visual encoding is typically validated with a lab study, for example. See section 4.6.5 for an important discussion of mismatches between the validation method and the benefit of the visualization.
Last changed: 08.04.2015, 15:18 | 2629 words