Tuesday, April 11, 2023

Decaying Window

This algorithm allows you to identify the most popular elements (trending, in other words) in an incoming data stream.


The decaying window algorithm not only tracks the most recurring elements in an incoming data stream, but also discounts any random spikes or spam requests that might have boosted an element’s frequency. In a decaying window, you assign a score or weight to every element of the incoming data stream. Further, you need to calculate the aggregate sum for each distinct element by adding all the weights assigned to that element. The element with the highest total score is listed as trending or the most popular.


  1. Assign each element with a weight/score.
  2. Calculate aggregate sum for each distinct element by adding all the weights assigned to that element.


In a decaying window algorithm, you assign more weight to newer elements. For a new element, you first reduce the weight of all the existing elements by a constant factor k and then assign the new element with a specific weight. The aggregate sum of the decaying exponential weights can be calculated using the following formula:

sum_{i=0}^{t-1} a_{t-i} * (1 - c)^i

Here, c is a small constant. Whenever a new element, say a_{t+1}, arrives in the data stream, you perform the following steps to obtain the updated sum:

1. Multiply the current sum/score by the value (1−c).
2. Add the weight corresponding to the new element.

Generally, the weight of each arriving element is taken as 1.

Weight decays exponentially over time

In a data stream consisting of various elements, you maintain a separate sum for each distinct element. For every incoming element, you multiply the sum of all the existing elements by a value of (1−c). Further, you add the weight of the incoming element to its corresponding aggregate sum.

A threshold can be set so that elements whose weight falls below it are ignored.

Finally, the element with the highest aggregate score is listed as the most popular element.
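The two-step update above can be sketched as follows; this is a minimal illustration, with an illustrative tag stream and c = 0.1:

```python
def decaying_window_scores(stream, c=0.1):
    """Maintain a decayed aggregate score per distinct element.

    For every arriving element, every existing score is first multiplied
    by (1 - c); the arriving element's score then gets its weight (1) added.
    """
    scores = {}
    for element in stream:
        for key in scores:          # step 1: decay all current sums
            scores[key] *= (1 - c)
        scores[element] = scores.get(element, 0.0) + 1.0  # step 2: add weight
    return scores

tags = ["fifa", "ipl", "fifa", "ipl", "ipl", "ipl", "fifa"]
scores = decaying_window_scores(tags)
trending = max(scores, key=scores.get)  # element with the highest score
```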

Example

For example, consider the sequence of Twitter tags below:
fifa, ipl, fifa, ipl, ipl, ipl, fifa

Also, let's say each element in the sequence has a weight of 1, and let c be 0.1.
The aggregate sum of each tag at the end of the above stream is calculated as below:

fifa

fifa - 0 * (1-0.1) + 1 = 1 (adding 1 because the current tag is fifa)
ipl - 1 * (1-0.1) + 0 = 0.9 (adding 0 because the current tag is not fifa)
fifa - 0.9 * (1-0.1) + 1 = 1.81
ipl - 1.81 * (1-0.1) + 0 = 1.629
ipl - 1.629 * (1-0.1) + 0 = 1.4661
ipl - 1.4661 * (1-0.1) + 0 = 1.3195
fifa - 1.3195 * (1-0.1) + 1 = 2.1875

ipl

fifa - 0 * (1-0.1) + 0 = 0 (adding 0 because the current tag is not ipl)
ipl - 0 * (1-0.1) + 1 = 1 (adding 1 because the current tag is ipl)
fifa - 1 * (1-0.1) + 0 = 0.9
ipl - 0.9 * (1-0.1) + 1 = 1.81
ipl - 1.81 * (1-0.1) + 1 = 2.629
ipl - 2.629 * (1-0.1) + 1 = 3.3661
fifa - 3.3661 * (1-0.1) + 0 = 3.0295

At the end of the sequence, the score of fifa is about 2.1875, while the score of ipl is about 3.0295.
So, ipl is more trending than fifa.
ipl not only occurred more often (four times versus three), its occurrences were also more recent, and the decaying window rewards recency.

Advantages of Decaying Window Algorithm:

  1. Sudden spikes or spam data are discounted.
  2. Newer elements are given more weight, which yields the right trending output.

Similar Blog : Nitin Decaying Window

---------------------------------------------------------------------------- 
All the messages below are just forwarded messages if some one feels hurt about it please add your comments we will remove the post. Host/author is not responsible for these posts.

Sampling + Reservoir Sampling Algorithm + Hot List Problem + Frequent Algorithm + Space Saving Algorithm


Sampling: a subset of the data to be analyzed, chosen so that it has approximately the same properties as the original data

Random sampling: each element has an equal probability of being selected
Reservoir sampling: a technique to maintain an online random sample

Reservoir Sampling Algorithm

1. Used to maintain an online random sample
2. Maintains a sample of size k, known as the reservoir
3. Every new element has a probability k/n of replacing an old element in the reservoir
4. For example, with k = 1: the n-th item in the stream replaces the element in the reservoir with probability 1/n
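The steps above can be sketched as follows (a minimal illustration; the `random` module supplies the coin flips):

```python
import random

def reservoir_sample(stream, k):
    """Maintain a uniform random sample of size k over a stream."""
    reservoir = []
    for n, item in enumerate(stream, start=1):
        if n <= k:
            reservoir.append(item)       # fill the reservoir first
        else:
            j = random.randint(1, n)     # keep the new item with probability k/n
            if j <= k:
                reservoir[j - 1] = item  # it replaces a uniformly chosen slot
    return reservoir

sample = reservoir_sample(range(1000), 10)
```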




Hot List Problem

Goal to Achieve

1. Continuously maintain a list of the top-k most frequent elements in a stream
2. Rank the items; the absolute counts are immaterial

Hot List Problem +  Frequent Algorithm 



Hot List Problem - Space Saving Algorithm
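The details are not spelled out in this post; as a reference point, here is a minimal sketch of the standard Space-Saving idea: keep at most m counters, and when an unmonitored element arrives while all counters are in use, evict the element with the smallest count and let the newcomer inherit that count plus one.

```python
def space_saving(stream, m):
    """Approximate the top elements of a stream using at most m counters."""
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1                        # already monitored: count it
        elif len(counters) < m:
            counters[x] = 1                         # a free counter is available
        else:
            victim = min(counters, key=counters.get)
            counters[x] = counters.pop(victim) + 1  # inherit min count + 1
    return counters

stream = ["a"] * 10 + ["b", "c", "d"]
top = space_saving(stream, m=2)
```

Counts may overestimate (never underestimate) a monitored element's true frequency, which is why the method is safe for top-k ranking.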




---------------------------------------------------------------------------- 

Saturday, April 8, 2023

Cassandra

Cassandra is a NoSQL database designed for write-heavy workloads and eventual consistency.












---------------------------------------------------------------------------- 

Wednesday, April 5, 2023

Link Analysis

Most IR functionality thus far has been based purely on text; link analysis adds the Web's hyperlink structure as a signal for:

Scoring and ranking
Link-based clustering – topical structure from links
Links as features in classification – documents that link to one another are likely to be on the same subject

Assumptions
1. A link is a signal of quality: reputed sites link to other reputed sites
2. The anchor text of a link annotates (describes) the target page


Page Rank 
 Scoring measure based on the link structure of web pages
 PageRank = long-term visit rate = steady state probability.
 
Markov chains
 A Markov chain is a discrete-time stochastic process: a process that occurs in a series of time steps, in each of which a random choice is made.
 A Markov chain consists of N states, plus an N × N transition probability matrix P.
 state = page
 At each step, we are on exactly one of the pages.



Teleporting – to get us out of dead ends
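A toy power-iteration sketch of these ideas; the 3-page link graph and the teleport probability of 0.15 are made up for illustration:

```python
import numpy as np

def pagerank(adj, teleport=0.15, iters=100):
    """Power iteration on the teleporting random-walk Markov chain.

    adj[i][j] = 1 if page i links to page j.  A dead end (row with no
    out-links) teleports uniformly; otherwise the walker follows a random
    out-link with prob. (1 - teleport) and teleports with prob. teleport.
    """
    A = np.asarray(adj, dtype=float)
    n = A.shape[0]
    P = np.empty_like(A)                 # transition matrix of the chain
    for i in range(n):
        out = A[i].sum()
        if out == 0:
            P[i] = 1.0 / n               # dead end: teleport anywhere
        else:
            P[i] = teleport / n + (1 - teleport) * A[i] / out
    r = np.full(n, 1.0 / n)              # start from the uniform distribution
    for _ in range(iters):
        r = r @ P                        # one step of the chain
    return r                             # steady-state (long-term visit rate)

# Pages 0 and 1 link to each other; page 2 links only to page 0.
ranks = pagerank([[0, 1, 0], [1, 0, 0], [1, 0, 0]])
```

Page 2 has no in-links, so it collects only teleport probability and ranks last.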






HITS – Hyperlink-Induced Topic Search

Hubs:  A hub page is a good list of links to pages answering the information need
Authorities:  An authority page is a direct answer to the information need

A good hub page for a topic links to many authority pages for that topic
A good authority page for a topic is linked to by many hub pages for that topic
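The mutual reinforcement between hubs and authorities can be sketched as an iteration (the toy graph is made up for illustration):

```python
import numpy as np

def hits(adj, iters=50):
    """HITS: a page's authority score sums the hub scores of pages linking
    to it; its hub score sums the authority scores of pages it links to.
    Scores are normalized each round."""
    A = np.asarray(adj, dtype=float)
    n = A.shape[0]
    hubs = np.ones(n)
    auths = np.ones(n)
    for _ in range(iters):
        auths = A.T @ hubs              # in-links from good hubs boost authority
        hubs = A @ auths                # links to good authorities boost hubness
        auths /= np.linalg.norm(auths)
        hubs /= np.linalg.norm(hubs)
    return hubs, auths

# Pages 0 and 1 both link to page 2: 0 and 1 are hubs, 2 is the authority.
hubs, auths = hits([[0, 0, 1], [0, 0, 1], [0, 0, 0]])
```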




---------------------------------------------------------------------------- 

Tuesday, April 4, 2023

Recommender Systems

Long tail effect: many popular products are found both in retail stores and online;
                  less popular products are found only online

---------------------------------
Modelling Recommender Systems 

U = {USERS}
I = {ITEMS}
F is a utility function that measures the usefulness of an item i in I to a user u in U.
F : U x I -> R, where R is the set of ratings

--------------------------------

Characteristics of recommendation system



Types of recommender system 





SVD - Singular value decomposition
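The slide content is missing here; as a reference sketch, SVD factorizes the user-item rating matrix R into U S V^T, and truncating to the top-k singular values gives the low-rank approximation used for recommendations (the rating values below are made up):

```python
import numpy as np

# Toy user x item rating matrix (rows: users, columns: items).
R = np.array([[5.0, 4.0, 1.0],
              [4.0, 5.0, 1.0],
              [1.0, 1.0, 5.0]])

U, s, Vt = np.linalg.svd(R, full_matrices=False)

k = 2                                            # keep the top-k singular values
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]    # rank-k approximation of R
```

Entries of R_hat at positions a user has not rated serve as predicted ratings.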







---------------------------------------------------------------------------- 

Sunday, April 2, 2023

Web Crawler

Web Crawler: software for downloading pages from the Web; also known as a Web Spider, Web Robot, or simply a Bot.

Web Crawler Applications

1. Create an index covering broad topics (general Web search)
2. Create an index covering specific topics (vertical Web search)
3. Archive content (Web archival, URL: http://www.archive.org/)
4. Analyze Web sites for extracting aggregate statistics (Web characterization)
5. Keep copies of or replicate Web sites (Web mirroring, daily or weekly)
6. Web site analysis (broken links, site not available)

Crawler Taxonomy




Basic Web Crawler Architecture
Three components of a Web crawler:

Scheduler - maintains a queue of URLs to visit
Downloader - downloads the pages
Storage - indexes the pages and provides the scheduler with metadata on the pages retrieved
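A sketch of how the three components interact, using an in-memory link graph in place of real HTTP downloads (the URLs and page contents are made up):

```python
from collections import deque

# Simulated web: URL -> (page text, out-links).  A real downloader would
# fetch these over HTTP instead.
WEB = {
    "http://a.example/": ("page A", ["http://b.example/", "http://c.example/"]),
    "http://b.example/": ("page B", ["http://c.example/"]),
    "http://c.example/": ("page C", []),
}

def crawl(seed):
    frontier = deque([seed])      # Scheduler: queue of URLs to visit
    seen = {seed}
    storage = {}                  # Storage: indexed page contents
    while frontier:
        url = frontier.popleft()
        text, links = WEB[url]    # Downloader: fetch the page
        storage[url] = text       # index the page content
        for link in links:        # feed newly found URLs back to the scheduler
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return storage

pages = crawl("http://a.example/")
```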


Crawling complications 

1. Malicious pages - spam pages and spider traps (crawler traps)
2. Non-malicious pages -
      latency/bandwidth to remote servers varies
      webmasters' stipulations - how deep to crawl within a website
3. Site mirrors and duplicate pages
4. Politeness - how frequently we may hit a server
         Explicit politeness: specifications from webmasters on what portions of a site may be crawled (robots.txt)
         Implicit politeness: even with no specification, avoid hitting any site too often
 
--------------------------------------------------------------------------------------------------------------------- 
  
A crawler should be distributed and scalable, with good performance and efficiency:
   fetch higher-quality pages first
   operate continuously - fetch fresh copies of previously crawled pages
   be extensible - adapt to new data formats

robots.txt - avoids overloading a site

----------------------------------------------------------------------------------------------------------------------
URL frontier





1. The URL frontier is the data structure that holds and manages the URLs we have seen but not yet crawled
2. It can include multiple pages from the same host
3. It must avoid trying to fetch them all at the same time
4. It must keep all crawling threads busy

Considerations 

Politeness: do not hit a web server too frequently
Freshness: crawl some pages more often than others
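A minimal sketch of the politeness side of a frontier: one FIFO queue per host, plus a per-host earliest-next-fetch time (the class name and delay value are illustrative):

```python
import time
from collections import deque
from urllib.parse import urlparse

class PoliteFrontier:
    """URL frontier with one FIFO queue per host and a minimum gap
    between successive requests to the same host."""

    def __init__(self, delay=1.0):
        self.delay = delay
        self.queues = {}          # host -> deque of URLs waiting to be crawled
        self.next_ok = {}         # host -> earliest time we may hit it again

    def add(self, url):
        host = urlparse(url).netloc
        self.queues.setdefault(host, deque()).append(url)

    def pop(self, now=None):
        """Return a URL whose host is currently allowed, else None."""
        now = time.monotonic() if now is None else now
        for host, q in self.queues.items():
            if q and now >= self.next_ok.get(host, 0.0):
                self.next_ok[host] = now + self.delay
                return q.popleft()
        return None

frontier = PoliteFrontier(delay=5.0)
frontier.add("http://a.example/page1")
frontier.add("http://a.example/page2")
```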


-------------------------------------------------------------------------------------------------------------
Basic crawl architecture 


-------------------------------------------------------------------------------------------------------------



---------------------------------------------------------------------------- 