What is Crawling?
Ans. Crawling, or web crawling, refers to the automated process by which search engines discover and fetch web pages for indexing.
Web crawlers go through web pages, look for relevant keywords, hyperlinks, and content, and bring that information back to the search engine's servers for indexing.
As crawlers like Googlebot also go through other linked pages on websites, companies build sitemaps for better accessibility and navigation.
What is Indexing?
Ans. Indexing begins once the crawling process is complete. Google uses crawling to collect pages relevant to search queries and creates an index that records specific words or search terms and their locations.
Search engines answer users' queries by looking up the index and showing the most appropriate pages.
What is On-Page SEO?
Ans. On-page SEO refers to all the activities performed within a website to earn higher rankings and more relevant traffic from the search engines.
On-page SEO covers the optimization of the content as well as the HTML source code of a web page. Some of its aspects include meta tags, title tags, meta descriptions, and heading tags.
What is Off-Page SEO?
Ans. Off-page SEO relates to the other factors that influence the search ranking of a website on the Search Engine Results Page (SERP).
It refers to promotional activities, such as content marketing, social media, and link building, performed outside the boundaries of a web page to improve its search ranking.
What are LSI keywords?
Ans. LSI (Latent Semantic Indexing) keywords are terms semantically associated with the main keyword that users enter into search engines. For example, for the main keyword "apple", LSI keywords might include "iPhone" or "fruit", depending on the intended meaning.
What is Canonical URL?
Ans. Canonical URLs relate to the concept of selecting the best URL for the web pages that visitors want to see. Specified through canonical tags, they help with content syndication when multiple versions of the same page become available on the Internet, and are therefore used to resolve content-duplication issues.
For example, most people would consider these the same URLs:
www.example.com
example.com/
www.example.com/index.html
example.com/home.asp
But technically all of these URLs are different.
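To point search engines at the preferred version, a canonical tag can be placed in the <head> section of each duplicate page. A minimal sketch, using example.com as a placeholder domain:
<link rel="canonical" href="https://www.example.com/" />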
What is SEO friendly URL?
Ans. SEO friendly URLs optimize the structure and word usage of URLs so that search engines can index a website more easily.
SEO techniques, such as including keywords and keeping a proper length and file structure in the URLs, help improve website ranking and enhance website navigation.
Search engines (Google, Bing, Yahoo, etc.) and users may have problems with complicated URLs. A clean and simple URL helps users and search engines understand the topic of a page easily.
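For example (both URLs are hypothetical), a complicated, parameter-heavy URL such as
https://www.example.com/index.php?id_cat=12&prod=4579
is much harder for users and search engines to interpret than a clean, keyword-based URL like
https://www.example.com/shoes/running-shoes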
What are meta descriptions?
Ans. Meta descriptions are HTML attributes that should provide an accurate description of a web page's content. These descriptions act as preview snippets for the web pages on the SERP.
Meta descriptions, which should ideally stay within 150 characters, enhance the promotional value of web pages and can earn higher user click-through rates if written correctly.
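A meta description is added as a meta tag in the <head> section of a page. A minimal sketch (the wording is only illustrative):
<meta name="description" content="A short, accurate summary of the page's content, ideally within 150 characters." />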
What are backlinks?
Ans. Backlinks, also called incoming or inbound links, are links on other web pages that bring users to your web page. These links play an important part in SEO.
When the Google search engine sees multiple quality backlinks pointing to a page, it considers the page more relevant to the search query, which helps in its indexing process and improves its organic ranking on SERPs.
What are the most important Google ranking factors?
Ans. According to Andrey Lipattsev, the Search Quality Senior Strategist at Google, the top 3 ranking factors affecting the search engine algorithm of Google are:
#1 Content
#2 Backlinks
#3 RankBrain
What is HTML Sitemap?
Ans. An HTML sitemap is a single HTML page that carries links to all the web pages of a specific website. It acts as a directory of every page on the site.
An HTML sitemap contains formatted text and linking tags for the website's pages. It is particularly useful when you have a large website with many web pages, because it improves your website's navigation by listing all the web pages in one place in a user-friendly manner.
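In its simplest form, an HTML sitemap is just a page containing an organized list of links. A minimal sketch, with placeholder page names and URLs:
<ul>
  <li><a href="https://www.example.com/">Home</a></li>
  <li><a href="https://www.example.com/about.html">About Us</a></li>
  <li><a href="https://www.example.com/products.html">Products</a></li>
  <li><a href="https://www.example.com/contact.html">Contact</a></li>
</ul>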
What is XML Sitemap?
Ans. An XML (Extensible Markup Language) sitemap is created primarily to facilitate the work of the search engines.
A good XML sitemap tells the search engines how many pages are present on a website, how frequently they are updated, and when they were last modified, which helps the search engines index the website properly.
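A minimal XML sitemap following the sitemaps.org protocol might look like this (the URL and date are placeholders):
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2024-01-01</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>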
How can I see what pages are indexed in Google?
Ans. There are two ways to see whether the web pages of a specific website have been indexed by Google.
1) One can check the Google Index Status of a website through Google Webmaster Tools. After adding the website to the dashboard and verifying ownership, clicking on the "Index Status" tab shows the number of pages indexed by Google.
2) One can also perform a manual search on Google by typing site:domainname.com into the Google search bar; the number of indexed pages is reflected on the SERP.
For example, searching site:example.com would return only the pages of example.com that Google has indexed.
What are 404 errors?
Ans. 404 errors are considered one of the biggest potential impediments to successful SEO. When a specific URL is renamed or ceases to exist, any link pointing to that URL will result in a 404 error.
Interestingly, Google does not penalize a website for 404 errors as such. However, if the search engines consistently fail to crawl the internal links of a website, the search ranking of that website is very likely to drop, along with its traffic.
What is anchor text?
Ans. Anchor text is the visible, clickable text of a hyperlink. Such hyperlinked text points to a different document or location available on the web.
Anchor text is often underlined and blue in color, but different colors can be set by changing the HTML code.
Anchor texts can be of different types, such as keyword-rich anchors, generic anchors, branded anchors, and image anchors.
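For example, in the link below (the URL is a placeholder), the words "SEO interview questions" form the anchor text; because the targeted keyword appears in it, it would count as a keyword-rich anchor:
<a href="https://www.example.com/seo-guide">SEO interview questions</a>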
What is Google Webmaster Tools/Google Search Console?
Ans. It was on 20th May 2015 that Google changed the name of Google Webmaster Tools to Google Search Console.
Google Search Console is a free web service that enables webmasters to monitor and maintain the online presence of their websites.
Google Search Console helps business owners, SEO experts, site administrators, and web developers to see crawl errors, crawl status, backlinks, and malware with the click of a button.
What is 301 Redirect?
Ans. A 301 redirect is considered one of the most effective ways of performing redirects on a website. When a web address has been changed permanently, it is best to use a 301 redirect, which sends all users to the new web address.
With this redirect, the search engine passes all the value associated with the old address to the new one. Moreover, a 301 redirect also passes the link juice to the new web address, which keeps the ranking of the website largely unaffected.
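For instance, on an Apache server a 301 redirect can be set up with a single line in the .htaccess file (the paths and domain below are placeholders):
Redirect 301 /old-page.html https://www.example.com/new-page.html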
What is Google Analytics?
Ans. Launched in 2005 by Google, Google Analytics is one of the most powerful analytics tools in SEO; it helps webmasters track and monitor the traffic on their websites.
What is Google PageRank?
Ans. Google PageRank was an algorithm that determined the relevancy and importance of a web page based on the number and quality of backlinks pointing to it.
In other words, PageRank views backlinks as votes: if Page X links to Page Y, Page Y receives a vote from Page X. PageRank interprets these links together with the page content to judge relevancy. The higher the relevancy level, the greater the importance Google ascribes to the page, which positively affects its organic results.
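The simplified formula from the original PageRank paper illustrates the idea. Here T1...Tn are the pages linking to page A, C(T) is the number of outbound links on page T, and d is a damping factor, usually set around 0.85:
PR(A) = (1 - d) + d * (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn))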
Nofollow: The nofollow attribute tells search engine bots not to follow a link. That means if a website owner links back to you with the nofollow attribute, the link does not pass on link juice; only humans will be able to follow it. Although some time back Google clarified how it treats nofollow attributes, the weight such links carry is really low. Even so, it is good practice to use the nofollow attribute on links to which you don't want to pass link juice.
An example of Nofollow Link:
<a href="http://www.google.com/" rel="nofollow">Google</a>
Dofollow links allow Google (and all search engines) to follow them and reach your website, giving you link juice and a backlink. If a webmaster links back to you with such a link, both search engines and humans will be able to follow it. The best way to give someone dofollow love is to place the keyword in the anchor text: when you link to any website or page, use the targeted keyword as the anchor text.
An example of Dofollow Link:
<a href="http://www.google.com/">Google</a>
Note: By default, all hyperlinks are dofollow, so you don't need to do anything to make a link dofollow.
What are outbound links?
Outbound links, also called external links, direct visitors from pages on your website to other sites on the Internet. Unlike inbound links, which send visitors to other pages on your website, outbound links send visitors to entirely different sites.
If another website links to you, that link is an outbound link from their site (and an inbound link, or backlink, to yours). Likewise, if you link to another website, that is an outbound link from your site.
Typically, external links pass more value than internal links. This is because search engines believe that what other people say about you is more important than what you say about yourself. In other words, if more websites link to your site, you will appear to be a more credible source.
External links are also harder to manipulate, so they are one of the best ways for search engines to determine the popularity and relevance of a particular website or page.
Keyword Density
Keyword density tells you how often a search term appears in a text in relation to the total number of words it contains. For example: if a keyword appears three times in a 100-word text the keyword density would be 3%. From the point of view of search engines, a high keyword density is a good indicator of search engine spam. If a keyword appears too often in a website, search engines will downgrade the website and it will then appear lower down in search results.
Keyword stemming
Keyword stemming is the procedure of creating new words from the same root word. For example, from the root word "optimize" you can stem "optimized", "optimizing", and "optimization".
It is a concept that everyone who plans to build a website on their own and hopes to succeed in this highly competitive area should keep in mind.
ARTICLE - Articles generally contain explanations, facts, detailed information, analysis, and/or visuals (screen captures, illustrations, etc.). Articles shouldn't be written informally; they are evaluated on the value of the content as well as on its accuracy, and personal opinion is not allowed. The length of an article can exceed 1400 words. Articles also include interviews and research from credible experts and research firms. Spelling and grammar must be impeccable, and articles aren't necessarily addressed directly to the reader.
BLOG - Blogs offer tips, general information, or otherwise brief coverage of a topic, and can be written informally, for example regarding the use and implementation of products. Your own opinion can feature heavily in blog writing. Blogs are generally between 100 and 500 words and use a casual writing style. Blog posts are often addressed directly to the reader and invite audience participation.
Panda
Launched in 2011, Panda was introduced to filter high-quality sites from lower-quality platforms, in line with Google's aim of providing users with only the most relevant search results. Panda examines the content that sits on a website, determines whether it is of a certain quality, and ranks it according to its quality criteria.
Elements of a site that may be deemed ‘low quality’ include duplicate content or content that has little value to users (i.e. pages that are too short, do not offer enough information). This algorithm was the first major step that Google took in more accurately returning valuable content to users and filtering out the content of lesser quality.
Penguin
Penguin was released in 2012; its aim is to examine the links used by and pointing to a website (especially focusing on unnatural links: those that may have been purchased or created for linking's sake, which is what Google considers 'unethical' practice).
Getting sites of good authority to link to your platform is great for improving your rankings, as Google will then consider your site as being a source of rich information, in which other reliable sites have put their trust. Buying links, or having a suspiciously high number of links point to your site from low-quality sources, is not looked on too favorably and will probably result in restricted rankings.
Hummingbird
Unlike Panda or Penguin, Hummingbird was not simply an algorithm change; it was a complete rework of Google’s overarching algorithm and indexing methods. It still utilizes Panda and Penguin, but at the time of implementation in August 2013, it completely changed Google’s approach to how websites are ranked.
Hummingbird was introduced to help Google better understand user queries, again in line with the search giant's approach of better catering to the needs of individuals looking for particular content. This change sought to understand what a user might actually mean when certain keywords are used, and to return results that Google feels are of most relevance. Content that is deemed to answer these queries, rather than content that simply tries to rank for a specific keyword, is likely to be looked on more favorably by the search engine in this instance.
Fetch as Googlebot: The Fetch as Google tool enables you to test how Google crawls a URL on your site.
Fetch: Fetches a specified URL on your site and displays the HTTP response. It does not request or run any associated resources (such as images or scripts) on the page. This is a relatively quick operation that you can use to check or debug suspected network connectivity or security issues with your site, and to see the success or failure of the request.
Fetch and render: Fetches a specified URL on your site, displays the HTTP response, and also renders the page according to a specified platform (desktop or smartphone). This operation requests and runs all resources on the page (such as images and scripts). Use this to detect visual differences between how Googlebot sees your page and how a user sees your page.
Google crawlers (user agents)
See which robots Google uses to crawl the web. "Crawler" is a generic term for any program (such as a robot or spider) used to automatically discover and scan websites by following links from one web page to another. Google's main crawler is called Googlebot.
How big can my Sitemap be?
Sitemaps should be no larger than 50MB (52,428,800 bytes) and can contain a maximum of 50,000 URLs. These limits help to ensure that your web server does not get bogged down serving very large files. This means that if your site contains more than 50,000 URLs or your Sitemap is bigger than 50MB, you must create multiple Sitemap files and use a Sitemap index file. You should use a Sitemap index file even if you have a small site but plan on growing beyond 50,000 URLs or a file size of 50MB. A Sitemap index file can include up to 50,000 Sitemaps and must not exceed 50MB (52,428,800 bytes). You can also use gzip to compress your Sitemaps.
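A sitemap index file follows the same sitemaps.org protocol; a minimal sketch, with placeholder sitemap URLs:
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>https://www.example.com/sitemap1.xml</loc>
  </sitemap>
  <sitemap>
    <loc>https://www.example.com/sitemap2.xml</loc>
  </sitemap>
</sitemapindex>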
NoIndex
The noindex directive is an often-used value in a meta tag that can be added to the HTML source code of a web page to tell search engines (most notably Google) not to include that particular page in their search results.
By default, a webpage is set to "index." You should add a
<meta name="robots" content="noindex" />
directive to the <head> section of a webpage's HTML if you do not want search engines to include that page in the SERPs (Search Engine Results Pages).