Thursday, February 27, 2014

Week 8 Reading Notes: User interaction and visualization

1.       Effective human-computer interface
(1)     Design principles: offer informative feedback
(2)     Reduce working memory load
(3)     Provide alternative interfaces for novice and expert users.
Visualization, Evaluating Interactive Systems
2.       Information Access Process
3.       Starting Point
List of collections, overviews; examples, dialogs and wizards; automated source selection
4.       Query specification
5.       Context
Document surrogates, query term hits, superbook, categories, using hyperlinks to organize retrieval results, tables
6.       Relevance judgements
Interfaces, studies of user interaction, group relevance judgments, pseudo-relevance judgment feedback
7.       Interface support for search process

String matching, window management, example systems

Week 7 Muddiest Points

No muddiest point for this week

Monday, February 17, 2014

Week 7 Reading Notes: Relevance feedback and query expansion

1.       The issue of synonymy: it hurts the recall of most information retrieval systems.
Solution: query refinement, either automatic or with the user in the loop
(1)     Global methods include:
Query expansion/reformulation with a thesaurus or WordNet
Query expansion via automatic thesaurus generation
Techniques like spelling correction
(2)     Local methods adjust a query relative to the documents that initially appear to match the query. The basic methods here are:
• Relevance feedback
• Pseudo relevance feedback, also known as Blind relevance feedback

2.       Relevance feedback (RF): involves the user in the retrieval process so as to improve the final result set.
(1)     Basic procedure:
The user issues a (short, simple) query.
The system returns an initial set of retrieval results.
The user marks some returned documents as relevant or nonrelevant.
The system computes a better representation of the information need based on the user feedback.
The system displays a revised set of retrieval results.
(2)     The Rocchio algorithm

(3)     Very effective at improving the relevance of results
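The Rocchio update moves the query vector toward the centroid of the documents marked relevant and away from the centroid of those marked nonrelevant: q_m = α·q0 + β·centroid(Dr) − γ·centroid(Dnr). A minimal sketch; the default weights below are conventional choices, not values from the notes:

```python
import numpy as np

def rocchio(q0, rel_docs, nonrel_docs, alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio query refinement: shift the query vector toward the
    centroid of relevant documents and away from the centroid of
    nonrelevant documents."""
    q = alpha * q0
    if len(rel_docs):
        q = q + beta * np.mean(rel_docs, axis=0)
    if len(nonrel_docs):
        q = q - gamma * np.mean(nonrel_docs, axis=0)
    # Negative term weights are usually clipped to zero.
    return np.maximum(q, 0.0)
```

With α=1, β=0.75, γ=0.15, a query [1, 0], one relevant document [0, 2], and one nonrelevant document [2, 0], the revised query is [0.7, 1.5].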

Week 6 Muddiest Points

When measuring ranked retrieval, computing the precision at fixed recall points makes it possible to compare the performance of different IR systems. But since both precision and recall are important evaluation measures, how can the two be combined into a single judgment? Are there methods that act the same as the harmonic mean?

Friday, February 14, 2014

Week 6 Reading Notes: Evaluation

Information retrieval has developed as a highly empirical discipline, requiring careful and thorough evaluation to demonstrate the superior performance of novel techniques on representative document collections.

1.       Test collection:
A document collection;
A test suite of information needs, expressible as queries;
A set of relevance judgments, standardly a binary assessment of either relevant or nonrelevant for each query-document pair.
Standard test collections: Cranfield, Text Retrieval Conference (TREC)
2.       Evaluation of unranked retrieval sets
(1)     Precision (P) is the fraction of retrieved documents that are relevant
(2)     Recall (R) is the fraction of relevant documents that are retrieved
(3)     Accuracy is the fraction of the system's classifications that are correct
(4)     F measure, which is the weighted harmonic mean of precision and recall
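The measures in (1), (2), and (4) above can be sketched directly from their definitions; the general F measure uses a weight β, with β = 1 giving the usual harmonic mean F1 (the function name and signature are illustrative):

```python
def evaluate(retrieved, relevant, beta=1.0):
    """Precision, recall, and the weighted-harmonic-mean F measure
    for an unranked retrieved set."""
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)                     # true positives
    p = tp / len(retrieved) if retrieved else 0.0      # precision
    r = tp / len(relevant) if relevant else 0.0        # recall
    if p == 0.0 and r == 0.0:
        f = 0.0
    else:
        f = (1 + beta**2) * p * r / (beta**2 * p + r)  # F measure
    return p, r, f
```

For example, retrieving {1, 2, 3, 4} when {1, 2, 5} are relevant gives P = 0.5, R = 2/3, and F1 = 4/7.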
3.       Evaluation of ranked retrieval sets
precision-recall curve, interpolated precision
4.       Assessing relevance
Pooling: where relevance is assessed over a subset of the collection that is formed from the top k documents returned by a number of different IR systems
marginal relevance: whether a document still has distinctive usefulness after the user has looked at certain other documents
5.       System quality and user utility
(1)     User utility: a way of quantifying aggregate user happiness, based on the relevance, speed, and user interface of a system

(2)     Refining a deployed system: A/B TEST

Week 5 Muddiest Points

What is the biggest difference between the language model and the classic probabilistic model, given that both rely heavily on probability?

Monday, February 3, 2014

Week 4 Muddiest Points

In the vector space model, should log-frequency weighting be calculated before or after stop-word removal? How about normalization?

Week 5 Reading Notes: Matching models- probabilistic and language model

Probabilistic information retrieval
Estimate the probability of a term t appearing in a relevant document, P(t|R = 1); this can be the basis of a classifier that decides whether documents are relevant or not.
1.    basic probability theory: prior probability, posterior probability, odds
2.    The Probability Ranking Principle: The 1/0 loss case
3.    Binary independence model
(1)  pt = P(xt = 1|R = 1,~q): the probability of a term appearing in a document relevant to the query; ut = P(xt = 1|R = 0,~q): the probability of a term appearing in a nonrelevant document
(2)  Determine a guess for the size of the relevant document set.
(3)  Improve our guesses for pt and ut.
(4)  Go to step 2 until the ranking of the returned results converges.
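Given estimates of pt and ut, the Binary Independence Model scores a document by summing a log odds-ratio weight over the query terms it contains. A minimal sketch of that per-term weight (the retrieval status value, RSV, is the sum of these weights over matching terms):

```python
import math

def bim_term_weight(pt, ut):
    """Log odds-ratio weight c_t for one query term under the Binary
    Independence Model: c_t = log[ pt(1-ut) / (ut(1-pt)) ].
    Summing c_t over the query terms present in a document gives its
    retrieval status value (RSV)."""
    return math.log((pt * (1 - ut)) / (ut * (1 - pt)))
```

A term with pt = 0.5 and ut = 0.1 gets weight log 9 ≈ 2.2, so terms common in relevant documents but rare in nonrelevant ones contribute most to the ranking.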

Language models
A document is a good match to a query if the document model is likely to generate the query, which will in turn happen if the document contains the query words often.

The basic language modeling approach builds a probabilistic language model Md from each document d, and ranks documents based on the probability of the model generating the query: P(q|Md).

1.    Language models:
Generative model
Language of an automaton: the full set of strings that it can generate (from formal language theory)
Language model: a function that puts a probability measure over strings drawn from some vocabulary
unigram language model:  Puni(t1t2t3t4) = P(t1)P(t2)P(t3)P(t4)
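The unigram formula above treats each term independently, with P(t) estimated by maximum likelihood from the document's term frequencies. A minimal sketch (function names are illustrative):

```python
from collections import Counter

def unigram_model(doc_tokens):
    """Maximum-likelihood unigram model: P(t) = tf(t, d) / |d|."""
    counts = Counter(doc_tokens)
    total = len(doc_tokens)
    return {t: c / total for t, c in counts.items()}

def sequence_probability(model, tokens):
    """Puni(t1 t2 ... tn) = P(t1) * P(t2) * ... * P(tn)."""
    p = 1.0
    for t in tokens:
        p *= model.get(t, 0.0)  # unseen terms get probability 0 without smoothing
    return p
```

For the document "a b a c", P(a) = 0.5 and Puni(a b) = 0.5 × 0.25 = 0.125; any query containing an unseen term scores 0, which motivates the smoothing discussed below.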

2.    The query likelihood model
(1)   Goal: rank documents by P(d|q)
(2)   Method: using Bayes Rule P(d|q) = P(q|d)P(d)/P(q)
→ P(q|d): the probability of the query q under the language model derived from d;
a query is treated as a random sample drawn from the respective document model
(P(d) and P(q) are assumed uniform and are therefore usually ignored)
→ multinomial Naive Bayes model (page 263)
(3) Estimating P(q|Md): count how often each word occurs, and divide by the total number of words in the document d.
(4) Optimization: smooth the probabilities in our document language models to discount non-zero probabilities and give some probability mass to unseen words.
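Steps (3) and (4) can be sketched together. This uses linear-interpolation (Jelinek-Mercer) smoothing as one concrete way to give mass to unseen words; the mixing weight lam and the collection model are assumptions for illustration, since the notes only say the probabilities are smoothed:

```python
from collections import Counter

def query_likelihood(query_tokens, doc_tokens, collection_tokens, lam=0.5):
    """Score a document by P(q|Md) with Jelinek-Mercer smoothing:
    P(t|d) = lam * tf(t,d)/|d| + (1 - lam) * cf(t)/|C|,
    so a term unseen in d still gets mass from the collection model."""
    tf = Counter(doc_tokens)
    cf = Counter(collection_tokens)
    dlen, clen = len(doc_tokens), len(collection_tokens)
    score = 1.0
    for t in query_tokens:
        score *= lam * tf[t] / dlen + (1 - lam) * cf[t] / clen
    return score
```

With document "a b", collection "a b a c", and query "a", the score is 0.5 × 0.5 + 0.5 × 0.5 = 0.5; ranking all documents by this score implements the query likelihood model.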