Measures for a Search Engine
1. Latency of search (a timing sketch follows this list)
2. Expressiveness of the query language
   Ability to express complex information needs
   Speed on complex queries
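Latency can be measured directly by timing each query with a wall clock. The sketch below is a minimal illustration in Python; search_fn is a hypothetical callable (query -> results), not any particular engine's API.

import time

def measure_latency(search_fn, queries):
    """Return per-query wall-clock latency in milliseconds."""
    latencies = []
    for q in queries:
        start = time.perf_counter()
        search_fn(q)  # hypothetical search call; a real engine's API will differ
        latencies.append((time.perf_counter() - start) * 1000.0)
    return latencies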
Measuring Relevance Based on User Judgments
Three elements:
1. A benchmark document collection
2. A benchmark suite of queries
3. An assessment of either Relevant or Nonrelevant for each query-document pair
TREC Benchmark - Text REtrieval Conference
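The three benchmark elements map naturally onto plain data structures. The sketch below is illustrative only (the identifiers and the simple score are assumptions, not TREC's actual file formats); it computes the fraction of retrieved documents judged Relevant for one query.

documents = {"d1": "...", "d2": "...", "d3": "..."}   # 1. benchmark document collection
queries = {"q1": "evaluating information retrieval"}  # 2. benchmark suite of queries
judgments = {                                         # 3. Relevant/Nonrelevant assessments
    "q1": {"d1": "Relevant", "d2": "Nonrelevant", "d3": "Relevant"},
}

def fraction_relevant(retrieved, judgments, query_id):
    """Fraction of the retrieved documents judged Relevant for this query."""
    relevant = {d for d, label in judgments[query_id].items() if label == "Relevant"}
    return len(set(retrieved) & relevant) / len(retrieved) if retrieved else 0.0

print(fraction_relevant(["d1", "d2"], judgments, "q1"))  # 0.5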
Evaluating an IR system: we need to verify whether each retrieved document is relevant or not.
Difficulties in Evaluating IR Systems
1. Effectiveness is related to the relevancy of retrieved items.
2. Relevancy is not typically binary but continuous.
3. Even if relevancy is binary, it can be a difficult judgment to make.
Relevancy, from a human standpoint, is:
Subjective: Depends upon a specific user’s judgment.
Situational: Relates to user’s current needs.
Cognitive: Depends on human perception and behavior.
Dynamic: Changes over time.
Evaluating IR systems
1. Gold Standard (Human-Labeled Corpora): humans manually judge each query-document pair to create a gold standard.
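One common way to build such a gold standard (an assumption here, not something these notes prescribe) is to collect judgments from several human assessors per query-document pair and aggregate them, e.g. by majority vote:

from collections import Counter

def gold_label(annotations):
    """Aggregate several human judgments for one query-document pair
    by majority vote; ties fall back to Nonrelevant."""
    label, votes = Counter(annotations).most_common(1)[0]
    return label if votes * 2 > len(annotations) else "Nonrelevant"

# Hypothetical labels from three assessors for one query-document pair:
print(gold_label(["Relevant", "Relevant", "Nonrelevant"]))  # Relevant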