Latent semantic indexing is a complex topic, and a full understanding of it usually requires a solid background in mathematics.
There are a few methods that can be used to index web pages and retrieve the ones relevant to a user's query.
The obvious method of retrieving relevant pages is to match the words in a search query against the same words found in the available web pages.
The problem with simple word matching is that it is extremely inaccurate. One reason is that there are many different ways for a user to express the concept they are looking for.
This is known as synonymy. Mismatches also happen because many words have multiple meanings, which is known as polysemy.
With synonymy, the user's query may not actually match the text on the relevant pages, so those pages are overlooked. With polysemy, the terms in a user's query will often match terms in irrelevant pages.
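As a quick, made-up illustration (the pages and query below are invented for this example), here is a small Python sketch of why plain word matching goes wrong: the relevant page uses a synonym and is missed, while an irrelevant page is matched through a word with another meaning.

```python
# Hypothetical pages and query, invented purely to illustrate the problem.
documents = {
    "page_a": "used automobile prices and dealer reviews",            # relevant, but no shared word
    "page_b": "the jaguar is a large cat native to the americas",     # irrelevant, shares "jaguar"
}
query = "jaguar car for sale"

def keyword_match(query, text):
    """Count how many query words appear verbatim in the text."""
    return len(set(query.lower().split()) & set(text.lower().split()))

for name, text in documents.items():
    print(name, keyword_match(query, text))
# page_a scores 0 (synonymy: "car" vs. "automobile"),
# page_b scores 1 (polysemy: "jaguar" the animal, not the car).
```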
Latent semantic indexing, or LSI, is an attempt to overcome this problem by looking at the patterns of word distribution across the entire web.
Pages that have many words in common are considered semantically close in meaning, while pages with few words in common are considered semantically distant. The result is a reasonably accurate similarity value calculated for every content word or phrase.
In response to a query, the LSI database will return the pages it considers most relevant to the search.
The LSI algorithm doesn’t understand anything about word
meanings and does not require an exact match to return useful web pages.
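To make that a little more concrete, here is a rough sketch of the core LSI computation on a tiny hand-made term-by-page count matrix (all of the terms, pages, and counts are invented): the matrix is factored with singular value decomposition, only a couple of latent dimensions are kept, and a similarity value can then be calculated between any two words or pages.

```python
# A toy term-by-page count matrix; rows are terms, columns are pages.
import numpy as np

terms = ["car", "automobile", "engine", "cat", "leopard"]
A = np.array([
    [2, 0, 0],   # car        (only in page 1)
    [0, 2, 0],   # automobile (only in page 2)
    [1, 1, 0],   # engine     (in both vehicle pages)
    [0, 0, 2],   # cat        (only in the animal page)
    [0, 0, 1],   # leopard
], dtype=float)

# Decompose the matrix and keep only k latent dimensions.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
term_vecs = U[:, :k] * s[:k]      # terms in the latent space
doc_vecs = Vt[:k, :].T * s[:k]    # pages in the latent space

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "car" and "automobile" never appear in the same page, yet they end up
# close in the latent space because both co-occur with "engine".
print(cosine(term_vecs[0], term_vecs[1]))
```

The co-occurrence with "engine" is what pulls the two synonyms together, which is the pattern-of-distribution idea described above.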
Latent semantic indexing, by definition, is a mathematical and statistical technique for extracting and representing the similarity of meaning of words and passages through the analysis of large bodies of text. That definition can be a little difficult to parse, but in essence latent semantic indexing takes the keywords you type into a search engine and goes through the indexed web pages to find the best results for the keywords you are seeking.
Simply put, latent semantic indexing is a technique that projects queries and documents into a space with latent semantic dimensions.
In this latent semantic space, a query and a document can be similar even if they share none of the same terms, provided their terms are semantically related.
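Here is one way that projection might look in code, reusing the same style of toy matrix as above: the query is folded into the latent space and compared with each page by cosine similarity, so a page can score well without sharing a single term with the query. The data and the particular fold-in convention are illustrative assumptions, not a description of any specific search engine.

```python
import numpy as np

terms = ["car", "automobile", "engine", "cat", "leopard"]
A = np.array([
    [2, 0, 0],   # car
    [0, 2, 0],   # automobile
    [1, 1, 0],   # engine
    [0, 0, 2],   # cat
    [0, 0, 1],   # leopard
], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vecs = Vt[:k, :].T * s[:k]        # pages in the latent space

query = "automobile"                  # page 1 says "car", never "automobile"
q = np.array([1.0 if t in query.split() else 0.0 for t in terms])
q_vec = q @ U[:, :k]                  # fold the query into the same latent space

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

for i, d in enumerate(doc_vecs):
    print(f"page {i + 1}: {cosine(q_vec, d):.2f}")
# Page 1 scores as well as page 2 even though it never contains "automobile";
# the animal page scores roughly zero.
```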
LSI is a similarity metric related to word overlap measures, but the latent space has fewer dimensions than the original term space, so it is also a method of dimensionality reduction. There are several different possible mappings from the high-dimensional space to a low-dimensional one; LSI chooses the mapping that is optimal in the sense that it minimizes the distance between the original representations and the reduced ones. Choosing the number of dimensions is a problem in its own right: the reduction can remove much of the noise, but keeping too few dimensions may lose important information.
This reduction takes a set of objects that exist in a high-dimensional space and represents them in a lower-dimensional space instead, often in two or three dimensions purely for the purpose of visualization. The mathematical technique that latent semantic indexing applies to perform this reduction is known as singular value decomposition.
The projection into the LSI space is chosen so that the representations in the original space are changed as little as possible, where the change is measured by the sum of the squares of the differences.
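The sketch below checks that statement numerically on an arbitrary random stand-in matrix (the data is synthetic, chosen only to have the right shape): the rank-k matrix built from the top k singular values differs from the original by exactly the sum of the squared singular values that were thrown away, which is the sum-of-squares sense in which the change is kept as small as possible.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.poisson(1.0, size=(200, 50)).astype(float)   # synthetic stand-in term counts

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 10
A_k = (U[:, :k] * s[:k]) @ Vt[:k, :]                 # rank-k LSI approximation

print(np.sum((A - A_k) ** 2))     # sum of squared differences actually incurred
print(np.sum(s[k:] ** 2))         # sum of the squared discarded singular values
```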
Typically, LSI performance improves considerably once ten to twenty dimensions are used, peaks at around seventy to one hundred dimensions, and then slowly begins to diminish again. The same pattern of performance has been observed with other datasets as well.
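One rough way to look at that trade-off is to sweep the number of kept dimensions and see how much of the original matrix each value preserves. The sketch below does this on a synthetic stand-in matrix; the data and the specific values of k are assumptions, chosen only to echo the ranges mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.poisson(1.0, size=(500, 120)).astype(float)   # synthetic stand-in counts

s = np.linalg.svd(A, compute_uv=False)                # singular values only
total = np.sum(s ** 2)
for k in (10, 20, 70, 100, 120):
    kept = np.sum(s[:k] ** 2) / total
    print(f"k = {k:3d}: {kept:.1%} of the matrix (by sum of squares) retained")
```

In practice the best number of dimensions is usually found by measuring retrieval quality directly rather than by looking at the retained sum of squares alone.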
To summarize, latent semantic indexing treats pages that have many words in common as close in meaning, sorts them accordingly, and presents them to the searcher. The result is an LSI-indexed database with similarity values calculated for every content word and phrase, and in response to a query the database returns the pages it judges to fit the keywords best.
The algorithm understands nothing about what the words actually mean, yet it does not require an exact match to return results that are useful to the searcher.