Monday, April 25, 2016

Tues, Apr 26: Quizz: Targeted crowdsourcing with a billion (potential) users

Quizz: Targeted crowdsourcing with a billion (potential) users. Ipeirotis, Panagiotis G., and Evgeniy Gabrilovich. Proceedings of the 23rd International Conference on World Wide Web. ACM, 2014.

4 comments:

  1. The paper proposes a crowdsourcing approach that is predictable, does not rely on monetary rewards, and targets users with specialist knowledge. The system is a quiz consisting of a question and several answer options, and it uses the web's advertising infrastructure to reach specific users. Depending on the user's performance, one of two types of questions is selected: a calibration question (with a known answer) or a collection question (with an unknown answer). The system uses a Markov decision process (MDP) to model the question-scheduling strategy and defines an information gain metric to evaluate user performance; a rough sketch of such a metric is given at the end of this comment.
    In the experiments the system converges toward a more specialized group of contributors and, consequently, better performance. Compared to Mechanical Turk or oDesk (now Upwork), it produces more correct answers per question, resulting in more reliable data.
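    A minimal sketch of such an entropy-based gain (my own illustration from this description, not the paper's exact formula): treat a user's answer to an n-option question as reducing uncertainty from a uniform prior to a distribution that puts probability q on the correct option and spreads the rest over the wrong ones.

    ```python
    import math

    def _h(p: float) -> float:
        # One term of the Shannon entropy, safe at p = 0.
        return -p * math.log2(p) if p > 0.0 else 0.0

    def answer_information_gain(q: float, n: int) -> float:
        """Bits gained from one answer by a user who is correct with probability q
        on a question with n options (illustrative sketch only)."""
        prior_entropy = math.log2(n)  # uniform prior over the n options
        posterior_entropy = _h(q) + (n - 1) * _h((1 - q) / (n - 1))
        return prior_entropy - posterior_entropy

    print(answer_information_gain(0.25, 4))  # random guesser: ~0 bits
    print(answer_information_gain(0.90, 4))  # knowledgeable user: ~1.37 bits
    ```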

  2. This paper describes a method of crowdsourcing that takes advantage of advertising techniques to attract high-quality, unpaid users. The authors construct a "game" in which users answer quiz questions and are rewarded with the correct answer (and in some cases, points).
    The authors changed the feedback supplied to the advertising service; rather than optimizing for clicks, they wanted to increase the number of people who clicked the link and then answered a quiz question. This helped attract more knowledgeable and motivated users.
    Recognizing that users were not going to answer quiz questions forever, the authors wanted to optimize information gain by choosing whether to display a calibration or a collection question to a user. They model this with an MDP whose state tracks the user's number of correct answers, incorrect answers, and number of collection questions asked; a toy sketch of this state and the calibration-vs-collection choice follows below.
    The authors also experimented with different ways to reward users for correct answers and explored the impact of these changes on the total number of questions answered, the number of correct answers given, and the score (information gain).
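    To make that state concrete, here is a toy stand-in for the scheduling decision (my own sketch, not the paper's actual MDP): keep asking calibration questions while the accuracy estimate is still uncertain, then switch to collection questions.

    ```python
    from dataclasses import dataclass

    @dataclass
    class UserState:
        correct: int = 0      # calibration questions answered correctly
        incorrect: int = 0    # calibration questions answered incorrectly
        collection: int = 0   # collection questions asked so far

    def estimated_accuracy(s: UserState) -> float:
        # Laplace-smoothed estimate of the probability the user answers correctly.
        return (s.correct + 1) / (s.correct + s.incorrect + 2)

    def accuracy_uncertainty(s: UserState) -> float:
        # Variance of the Beta(correct + 1, incorrect + 1) posterior over that accuracy.
        a, b = s.correct + 1, s.incorrect + 1
        return (a * b) / ((a + b) ** 2 * (a + b + 1))

    def next_question_type(s: UserState, var_threshold: float = 0.02) -> str:
        # Toy policy: calibrate until the estimate is tight, then harvest answers.
        return "calibration" if accuracy_uncertainty(s) > var_threshold else "collection"

    s = UserState(correct=8, incorrect=1)
    print(estimated_accuracy(s), next_question_type(s))  # ~0.82, 'collection'
    ```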

    Discussion:
    How does online advertising work? Is the advertiser charged only when someone clicks the link?

  3. This paper introduces a new system for high-quality crowdsourcing based on the idea that targeted unpaid workers have a stronger incentive to produce high-quality results and may have more domain knowledge than paid workers from sources like Amazon Mechanical Turk. The authors frame their crowdsourcing tasks as online quizzes, which consist of both "calibration" questions with known answers and "collection" questions with unknown answers. The authors develop an information-theoretic metric to estimate the quality of a user's answers from their responses to the calibration questions; a rough sketch of how quality and quantity could be combined into one score is given below. They further set up a Markov decision process over user actions to determine how to schedule questions so as to maximize the number of correct answers collected. Finally, the authors experimented with different incentives to encourage users to stay with the application. The whole system was combined with a conversion optimizer for the advertising campaign.
    In their results, the authors analyzed the effects of a number of their choices and found that the targeted advertising greatly increased both the quality and the quantity of answers given. User incentives, such as showing the correct answer, also improved participation. Finally, they demonstrated that the quality of users recruited with their system is much higher for specific tasks than on Mechanical Turk.
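    As a rough illustration (my own sketch based on this summary, not the paper's exact bookkeeping), quality estimated from calibration questions and the quantity of collection answers could be folded into one score by crediting each collection answer with the entropy reduction implied by the user's estimated accuracy:

    ```python
    import math

    def user_contribution_bits(correct_calib: int, total_calib: int,
                               collection_answers: int, n_options: int) -> float:
        """Approximate total information (bits) contributed by one user:
        per-answer entropy reduction at the estimated accuracy, times the
        number of collection answers. Illustrative sketch only."""
        # Smoothed accuracy estimate from the calibration questions.
        q = (correct_calib + 1) / (total_calib + 2)
        # Entropy of the answer distribution implied by accuracy q over n options.
        wrong = (1 - q) / (n_options - 1)
        posterior = -q * math.log2(q) - (n_options - 1) * wrong * math.log2(wrong)
        per_answer_gain = math.log2(n_options) - posterior
        return collection_answers * per_answer_gain

    # A near-expert answering 20 collection questions beats a near-guesser answering 50.
    print(user_contribution_bits(9, 10, 20, 4))   # ~21.7 bits
    print(user_contribution_bits(3, 10, 50, 4))   # ~1.3 bits
    ```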

    What would the effect be of allowing users to challenge their friends through social media? (people in the same social circles may have similarly good domain knowledge) Could using different types of tasks improve user engagement for this type of system?

  4. The authors introduce a novel system (Quizz) for crowdsourcing answers to hard questions via quizzes served through online advertising networks. Users are drawn to the quizzes by contextual advertising. The system uses information theory to decide between asking "calibration" and "collection" questions, where calibration questions are designed to gauge the knowledge of a particular user, and collection questions, which have no known answer, are the point of the Quizz system. By identifying high-performing users (experts) and directing collection questions at them, Quizz is able to find answers to unknown questions with a higher degree of certainty than by randomly sampling the crowd.

    User credibility does not necessarily correspond to the usefulness of answers. For example, with disease symptoms, it is conceivable that an average user with unique life experience might know of a little-known symptom of some disease; however, this sort of system would not prioritize that user. How might the system be modified to prioritize "unique" responses?

    Couldn't one get similar results simply by having all users answer a fixed number of calibration questions? (A trivial version of that baseline is sketched below.) Is the extra effort (math) that goes into balancing calibration and collection questions computationally defensible?
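    To make the first question concrete, the fixed-quota baseline would look something like this (a trivial sketch with a hypothetical quota parameter):

    ```python
    def next_question_fixed(calibration_asked: int, quota: int = 5) -> str:
        # Baseline: every user answers a fixed number of calibration questions,
        # then receives only collection questions, regardless of performance.
        return "calibration" if calibration_asked < quota else "collection"
    ```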
